piercer

Everything posted by piercer

  1. Oh, a note to add on to this. If you're pushing your RAM clock up, keep in mind you have no way of knowing how hot the RAM chips are getting, and they tend to be the #1 part that fails when people overclock. The Nvidia temps report GPU temps, not RAM temps, and that's true for ATI boards as well. I'd recommend getting a GPU cooler and some RAM heat sinks. These will help keep the board at more optimal temperatures. P-
  2. Yea, I'm running a 4200+ dual-core at 2.8GHz -- sure, I might not get the full bandwidth from the board, but I can turn around and push AA/AF a lot higher. My current 7900GT (overclocked to almost GTX speeds) is awesome, but because I run a 24" Samsung 244T at 1920x1200, my framerates suffer and I can't push the AA/AF up at all. If I were running at 1280x1024, I probably wouldn't think about getting the 8800GTX in the near term. P-
  3. November 8th, from what I hear -- it would be wise to wait. Lots of speculation, but the G80 boards ($600+ for the GTX) will be very, very nice and should outperform the 7950 GX2. If you can't afford one, that's OK, as it may mean a reduction in current board prices. Can't wait :-D as I'm going to buy a GTX. P-
  4. "Detect Optimal Settings" in the Nvidia control panel tests the card at increasingly higher rates until it experiences an anomaly/artifact, then cuts back. The problem with this method is that the anomaly doesn't happen at full load, but at basically idle temperatures. The best thing to do is cut back about 5% or so from the optimal settings. My card was 450MHz base [it's a 7900GT]; I modded the vcore and overclocked it. Optimal settings says 670 / 760 (1520DDR) -- I run it at 640/725, but that's also after I tested it overnight using ATI Tool's artifact checker (it's a fuzzy cube thing that spins super-fast -- it doesn't look like it is, but it is). If you can run the 3D artifact checker overnight (like 8-10 hours) without any errors, then you should be fine. But before you run it overnight, I'd give it a test spin (no pun intended) to see what happens to the GPU temperature, as the checker will produce a full-load situation for the board. If the GPU temperature gets too high you can destroy your card -- you should be able to tell how high the temps are going within a short period of time by running ATI Tool and watching the board's temperature monitor under the control panel. Sometimes the errors are 'visual' (you can see them) -- the fuzzy cube usually gets these yellow dots -- that's bad -- it means the clocks are too high and are generating errors. So, like the previous poster said, don't trust 'Optimal Settings', as 99% of the time the optimal settings will generate artifacts. P-
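The "cut back about 5%" rule of thumb above can be written out in a few lines of Python. This is just an illustration (the function name and margin default are mine, not anything from the driver or ATI Tool):

```python
def backed_off_clocks(core_mhz, mem_mhz, margin=0.05):
    # "Detect Optimal Settings" probes at near-idle temperatures, so
    # clocks that pass it often artifact under full load; backing off
    # ~5% leaves thermal headroom for real gaming loads.
    return round(core_mhz * (1 - margin)), round(mem_mhz * (1 - margin))

core, mem = backed_off_clocks(670, 760)  # the "optimal" reading from this post
# lands close to the 640/725 the post settles on after overnight testing
```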
  5. My only comment on this would be to wait about a week. Nvidia is announcing the 8800GTS and GTX on November 8th, if I recall the date properly. This might mean a price reduction on 79xx boards. P-
  6. A brief history of GL ;-) The GL 'Graphics Library' was first created at Silicon Graphics in the late 80's through the early 90's -- pioneered by Jim Clark, a professor at Stanford who founded SGI (along with six other cohorts); he later moved on to found Netscape. At that time they were proprietary graphics libraries targeted at high-end workstations and super-graphics computers, competing with other graphics systems like those of Evans and Sutherland. A little-known factoid is that in the 1991-1992 era SGI actually manufactured 3D graphics boards for PCs called 'IrisVision' -- they were predominantly used for AutoCAD and other high-end PC CAD systems. After lackluster sales and internal conflict they were discontinued. This would later haunt the company as one of the biggest blunders of its meteoric history. Somewhere around the mid to late 90's (I just don't recall) OpenGL was authored and pushed out into the market as a way to create a 'standard' for 3D graphics libraries. OpenGL was met with much success -- so much that companies like Nvidia were started to capitalize on it by developing chipsets specifically to accelerate and optimize the performance of the libraries for PCs. Recognizing that there is only one company allowed to create standards in the industry, Microsoft quickly designed and pushed out DirectX (there is a lot of sarcasm in that last sentence). Like all initial MS products it was a POS and took a while before it could actually do anything even similar to the OpenGL libraries. In a big 'uh oh', SGI realized much too late that they had given away the keys to the kingdom, as high-volume, low-cost PCs with 3D OpenGL-optimized graphics boards started to proliferate and began eating away at the company's low-end business. In the late 90's SGI crumbled, due to too many bad decisions. * SGI declared bankruptcy in early 2006 * Google now resides in SGI's old HQ in Silicon Valley The end :-D What did I win, Alex?
  7. Yea Xenon -- I did that already -- I know a lot of people overlook that since it's not the default setting, but that's what makes this such a pain of a problem. I'll post my dx when I get back from my business trip. I see other folks seem to be having a similar problem as well. P-
  8. I'm not really sure what the problem is with my system. I've done all of the following, but as infantry I still get the crash-reboot: 1) Reinstalled BE (as well as XP and all the drivers to the latest) 2) Reinstalled Nvidia drivers (84.21) 3) Tested the PSU under full load (11.98 / 5.05 / 3.32) 4) Ran Prime95 on both CPUs for 24 hours (no errors) 5) Ran rthdribl and ATI Tool's artifact checker (the fuzzy cube) -- no errors 6) Added .05v to memory and .5v to CPU to ensure stability 7) Underclocked the CPU 8) Re-did my paging files. And, after 45 minutes, CRASH-REBOOT, no error logged. And again, this only started after the latest set of patches. What's interesting is that when I fly this hasn't happened; it's when I'm playing infantry that it happens. And I can't seem to replicate the exact circumstances. The only thing that comes to mind is maybe an audio bug, but I swapped out my audio boards and have the motherboard audio disabled in the BIOS -- still the same problem. This seems to leave only a few possibilities: that one of the drivers, maybe the Nvidia driver, has some conflict with 1.24.x and is committing a kernel fault that's too catastrophic to dump the error. I wouldn't think it's the application per se, as that shouldn't cause such a crash-reboot problem. But maybe it's something the application is 'calling' -- maybe in the texture area... will keep checking it out. P-
  9. I did that on my system already -- matter of fact, I re-installed my whole XP system just over the weekend to alleviate any OS or install issues (I keep a backed-up copy of XP in a virgin state so I can reinstall quickly). If this is happening to you, read the following; it might help you figure out where the problem is as well. For me it happens when I'm infantry and running through the new bushes -- at least, that's where I notice it the most. I wasn't having this problem two weeks ago, but since the patch I'm really CTR'ing, to be exact (Crash To Reboot). What's even trickier: no dumps, no log entries, no error log out of WWII. At present I'm testing all my hardware to make sure it's not a CPU/MEM/PSU/MB/GPU type problem. I basically have everything overclocked, so I'm willing to consider it's my own doing -- I'm running memtest, Prime95, rthdribl, and I'm multimetering my PSU under full load. What's odd is that I've been running at these speeds for over 9 months with the game, and suddenly at 1.24.x I'm getting these crash-reboots. My memory should run at 250MHz (500DDR) at 2.5-3-3-8 latencies. Right now I have tighter timings (2-3-3-7) but a much lower clock (215), and memtest ran 12 hours without a hitch -- I also ran the game at 2.5-3-3-8 last night and after about an hour it crash-rebooted (I have the reboot option set to not reboot, but somehow the NT kernel isn't catching it and dumping properly). Right now I'm running Prime95 for the next 12 hours (you have to remember to run 2 instances, one for each core of your CPU) to see if a core is failing at its present 2.7GHz rate (AMD 4200 X2 OC'd from 2.2GHz). The PSU on the multimeter is within range under full load (2 Prime95s + rthdribl), only a .3v fluctuation on the 12v rail (11.73 average), which is acceptable -- so this leads me to either CPU/GPU or a software issue. Will let ya know as I learn more. P-
  10. Actually, check this. Because this has been happening to me as well. I can't figure out what it is, it just started recently. Next time it crashes go here: Start->Control Panel->Administrative Tools->Event Viewer Click on the left hand side where it says "System" -- look at the entries and tell me if you see one marked with an Information icon, very close to the time right before you crashed that says: The WMI Performance Adapter service was successfully sent a start control. Tell me if you have this. It seems that it starts and less than a minute later my screen goes black and the machine reboots. It's been a mystery to me so I've disabled the WMI performance service to see if I get any more black-screens-of-death. I just did this yesterday, so I'll give you an update if something changes. P-
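To automate that Event Viewer check, here's a minimal sketch of correlating log entries with reboot times. Everything here is hypothetical (the function, the data shapes, the timestamps); a real run would need the System log exported and parsed into (timestamp, message) pairs first:

```python
from datetime import datetime, timedelta

WMI_MSG = "The WMI Performance Adapter service was successfully sent a start control."

def wmi_starts_before_crashes(events, crashes, window_s=60):
    # events:  list of (timestamp, message) pairs from the System log
    # crashes: list of reboot/black-screen timestamps
    # Returns the WMI start events that fired within window_s seconds
    # before a crash -- the "less than a minute later" pattern above.
    hits = []
    for ts, msg in events:
        if WMI_MSG in msg and any(
            timedelta(0) <= c - ts <= timedelta(seconds=window_s) for c in crashes
        ):
            hits.append(ts)
    return hits

# synthetic example: WMI start at 21:04:10, reboot at 21:04:55
log = [
    (datetime(2006, 10, 1, 20, 0, 0), "Service Control Manager: some other event"),
    (datetime(2006, 10, 1, 21, 4, 10), WMI_MSG),
]
print(wmi_starts_before_crashes(log, [datetime(2006, 10, 1, 21, 4, 55)]))
```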
  11. If you want to spend some good $$, get the Samsung 244T widescreen (1920x1200) -- it's awesome! Best monitor I've ever purchased. My only recommendation: if you're looking for higher res, you're also going to need a video board that can keep up with the high-res output. P-
  12. Is that true? About widescreen? I have a 24" Samsung 244T -- the thing is completely awesome as far as LCD monitors go. It's my 2nd (at home), and I think it might just be the best monitor I've ever purchased. I went in and edited my settings to 1920x1200x32 (keep in mind you need a graphics board that'll drive that resolution) -- the 3D aspect ratio doesn't appear skewed to me. The widgets (compass, text, etc.) seem to get slightly aspect-skewed, but not the 3D space. Too bad I don't have my older monitor -- I could compare them. It does seem with 1920x1200 like I'm seeing more than I did under non-widescreen. P-
  13. Hey -- I've been reading about the advantages of the AMD Optimizer. I installed it the other day (I have a 4200 X2, OC'd to 2.7GHz), but now, instead of better frame rates, I get this weird stutter problem. I'm basically running 30-40fps in a busy town, and then I get this really laggy stutter for about 2 seconds, then it goes away for a little while. It seems to happen whenever I run up to a building. Wanted to know if anyone else has experienced this? I had no stutters prior, and when I uninstalled it (along with the AMD drivers), then re-installed just the AMD drivers, the weird stutter/lag went away. Anyone else experience this with the AMD Optimizer? Would love to get better FPS with it, but it seems to have a negative effect on my system. P-
  14. What are you using exactly: CH Pro Pedals, CH Yoke, CH Fighterstick/Combatstick?? CH Pro Throttle?? There is an issue, which SgtSpoon has looked into in depth, where the game (or Windows, not to point a finger anywhere) sometimes has a problem recognizing multiple USB HID devices. This can be caused by a USB keyboard driver or, as in my case, a Logitech G5 mouse. To get around this, I used the CH CMS software to create a single virtual controller, and now I don't have an issue. The CMS software can be a little confusing, so I suggest going to www.ch-hangar.com where they have fairly in-depth help in the forums. Another way to check if it's a USB problem (at least this works for me): unplug the USB keyboard, run the game, then when you get to the map screen plug your keyboard back in and see if this changes how your controllers are seen. P-
  15. I remember when I first started playing. I was getting killed left and right -- oh wait, that still happens. Anyway, what you begin to realize is this game requires you to 'think', not just shoot. When I first started playing I was always driving tanks, then I started playing infantry; it's now my favorite unit, along with flying. Playing infantry in this game means you have to plan out where you're going, what path to take, where and when to hide, etc. This makes the game more immersive -- sometimes when playing a sapper my palms begin to sweat as I'm sneaking up to place a satchel on an enemy tank, because it took a lot of path planning to get up on him. Now that's a rush you can't get from a BF2 or the standard FPS of choice. What I love about the game is that it forces me to learn and understand things I didn't realize before. When you start flying, you can't help but get a good history of the Me 109 to understand why Axis fighter tactics are so different from Allied ones. That's true of all the equipment throughout the game. If you want a game that makes you think and gives you the 'rush' of combat, then this is the game -- if you're just looking for frag points, headshots, and double-kills, then it's not. P-
  16. I wonder if FRAPS would catch it. At least then we'd all be on the same page. I mean, the trees flicker for me, but I've always thought it was part of the movement and the leaves rendering. P-
  17. Well, then yea, I have one of those ;-) Being that there is no XML API (though I really wish there was), I decided to design and build it all from scratch. What it does (it's all OO-Perl, BTW, and automated using cron): A) Points to a squad's member list web page B) Parses that page to know the latest members and expulsions, rank changes, company changes (if your squad does that) C) Marks those changes in the 'change log' D) Ummmm...borrows the stats once a night around 2am PST E) Updates those that need updating F) Appends the changes to the history file. If a squad member's last activity date (from the main CSR page) hasn't changed, it skips that entry (lessens the HTTP load and requests). Everything else is basically Perl scripts that serve up the stored info to page requests. It keeps a full history so you can look back over a bunch of campaigns to see if you're doing better or worse ;-) As you can see, they've been around for about 1 1/2 years, pretty bug-free at this point in time. Drop me a note if you want to know more: piercer@pacbell.net P-
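The diff-and-skip logic in that nightly run can be illustrated with a short sketch -- here in Python rather than the Perl the real scripts use, and with invented names and data shapes:

```python
def plan_nightly_run(roster, cache):
    # roster: name -> last-activity date scraped from the squad page tonight
    # cache:  name -> last-activity date recorded on the previous run
    fetch  = [n for n, d in roster.items() if cache.get(n) != d]  # stats worth re-fetching
    joined = [n for n in roster if n not in cache]                # additions for the change log
    left   = [n for n in cache if n not in roster]                # expulsions/departures
    return fetch, joined, left

# hypothetical roster snapshot vs. yesterday's cache
roster = {"axel": "2006-10-28", "brum": "2006-10-30"}
cache  = {"axel": "2006-10-28", "cody": "2006-09-01"}
fetch, joined, left = plan_nightly_run(roster, cache)
# axel is unchanged and skipped; brum is new; cody has left
```

Skipping unchanged entries is what keeps the 2am HTTP load down, exactly as the post describes.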
  18. Are you asking for a web page that compiles stats for a squad? Something like, maybe: http://www.katmaicube.com/cgi-bin/wwol/stat_index.pl?id=91st P-
  19. Yup, I have a PNY 7900GT, bought it for $289 US, OC'd it to 650/1700 -- It's now the same as a GTX, same core, etc... But, I did need to put a new cooler on it, as the stock one was made for lower temps. P-
  20. Yes, that's a very good question. You're running a 4200 -- at what clock speed? 2.2GHz? What's the FSB/HT multiplier set at? Seems we've almost exhausted everything... and noticing the difference between 3DMark06 and 3DMark05 leads me to think the only plausible thing left might just be the CPU (WWIIOL is very CPU-intensive as well). You could be in a position where the CPU is underclocked and can't keep up with the GPU. Why I say this: 3DMark06 should run about 1FPS on the CPU benchmark for most machines, so it has negligible impact on the total score. 3DMark05, on the other hand, can have CPU tests that hit 10-20fps -- a much more significant impact of the CPU on the total score. Hence, if the 06 and 05 scores are very close, like you're showing, it points a strong finger towards the CPU. Go google and download CPU-Z and run it on your system. It'll pop up a window with a few tabs that will tell you everything about what your CPU is doing and your memory timings. Yes, CnQ can cause problems -- it's buggy, and a lot of motherboards have had issues with it, especially for gamers. Its goal is to lower voltage and clock speed when the computer doesn't need them, to reduce power requirements and lower fan speeds. P-
  21. Yea, I'd say the vid board fried somehow. Do what the first poster suggested: check your system temperatures. This is becoming an increasing issue for folks -- all the new CPU and GPU cores being used these days kick out a lot more heat than in the past, and people don't realize how the internal temperatures of their systems might be increasing. The 7900GTX boards use almost 100W apiece; ever tried to unscrew a 100W lightbulb after it's been on for a few hours? They require good ventilation. And, no, resolution settings shouldn't have messed it up at all. P-
  22. Wow, 7 80mm system fans -- you could fry an egg on your system. What's your electric bill like? Does your family wear earplugs? That's an awful lot of air pushing around. I say this because most folks don't realize that too many fans can actually create internal case air turbulence that defeats the intention of the fans. I speak from experience on this, because I just went through some major overclocking on my system and had a heck of a time controlling the system temperatures -- only to realize that less was more. I went from 5 fans (3 80mm / 2 120mm) down to 3 (2 120mm / 1 80mm) and experienced a 10C drop in temperatures. (This doesn't include the one on the CPU, the one on the GPU, and the one in the PSU.) What I realized, after driving my wife crazy between fan noise and me continually ripping open the computer case every 2 hours, was that I had way too much positive air pressure (that's more air going into the case than coming out). Now I have negative air pressure: only one 120mm fan blowing into the case (lower side of case); the other 2 fans (1 120mm / 1 80mm) are exhausts (top and rear of case). By reconfiguring, I went from 46C (22C above ambient) to 34C (12C above ambient -- different ambients, since I don't have central air and can't control the ambient temperature). BTW, here are my CPU specs: 4200+ (2.2GHz base) --> OC'd 2.67GHz @ 1.375v, ~34C idle (187MHz mem clock, 2-3-3-6 timings, 4x512 Corsair 3200XLPT -- 442MHz OC rate). Overclocking is doable on your chip -- I'd recommend going to overclock.net and reading as much as possible before you try it. Expect to take a good 2 days to get it tweaked to your liking, and make sure you thoroughly test the overclock with Prime95 and memtest so you don't have any unexpected crashes. Also, pay attention to locking the PCI bus -- see if your mobo supports it and how. The last thing you want is an overclock that is unstable and writes bad stuff to your HD. P-
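The degrees-above-ambient comparison in that post is simple arithmetic, but it's worth spelling out since raw case temps from different days aren't comparable. A trivial sketch (the ambient values are back-calculated from the figures quoted above):

```python
def delta_over_ambient(case_c, ambient_c):
    # Expressing case temperature as degrees above ambient makes runs
    # taken at different room temperatures comparable.
    return case_c - ambient_c

five_fan  = delta_over_ambient(46, 24)  # positive-pressure layout: 22C over ambient
three_fan = delta_over_ambient(34, 22)  # negative-pressure layout: 12C over ambient
```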
  23. I have 2 controllers that I think work together perfectly. You could go out and buy a fancy Saitek X52, but from my experience so far this combo works well for me. Left hand: Belkin Nostromo N52. I use this instead of the keyboard; I've had it for almost 2 years now and can't part with it. If I could change anything on it, it would be to have a slider instead of a mouse-wheel for throttle control -- keymapping the wheel to throttle doesn't cut it for me, but it's good for swapping weapons. It's fully configurable, and within a very short period of time you'll notice you can keymap about 25 different keys or more with minimal effort or finger reach. I map the precision aim to keys on the Nostromo (I think this is talked about on Spoon's site). Right hand: Saitek EVO. I bought the EVO after reading a bunch of posts here and have had it for about 1 1/2 years. Great stick, no complaints. I don't use any of the base buttons -- they are in what I consider a non-reachable position. I have two complaints with most sticks: 1) The stick 'pressure' -- I wish it could be adjustable, or a little tighter, so I don't get herky-jerky when things heat up. Most sticks use a spring to emulate 'pressure', but it always seems looser than it should be. 2) Twist rudder -- whoever invented this should be drawn and quartered. Maybe it works OK in some cheapo flight sim, but when it comes down to combat flight, the last thing you want is a twist rudder as you're flip-flopping all over the place. Inevitably, as the tension mounts, you'll accidentally twist the rudder while maneuvering, which can only mean a result you didn't intend (read: lawn-dart). After experiencing this a few times I went and bought foot pedals. The physical configurability of the stick is nice for adjusting it to your own hand. It takes a little practice, but not much, to get comfortable driving with a stick over a steering wheel.
My only other issue is the one I mentioned with the N52, which is throttle control. The Saitek EVO throttle is at the base, like on most sticks, and I always find this awkward. When you fly, or drive (anything that requires throttle), you shouldn't have to look over or feel you have to search for the throttle, and I don't like taking my hand off the N52 to adjust it. P-
  24. Yea, 500W should be good. I run the 4200 with the 7900GT at GTX speeds, with 2 disk drives, a floppy, a DVD drive, a sound card, and 2GB of RAM on a 480W PSU. I would suggest you buy an Antec, as they tend to be one of the best -- you get a good supply to multiple rails. Worst case, it doesn't solve your problem, but at least you're running your system on an adequate PSU vs. what you have now, which I would say is about 150-200W under what you need. P-
  25. OK, we might be making some progress here. First off, 300W is not very much power. Matter of fact, look at this card: http://www.bfgtech.com/7900GTX_512_PCIX.html All Nvidia manufacturers have to build to the design specification, so your Asus card isn't really that different from the BFG card. BFG sometimes does a few tweaks to beef up the cards, but the power requirements are similar. They recommend at minimum a 400W PSU and, if you keep reading, a 500W PSU when using 2 cards -- so each card might suck up around 100W. Here's something you can do to see if it's your power supply. You can download one of the many system monitoring programs, or sometimes new motherboards come with an application to do this. Most of these have a voltage reader for the CPU and the PSU outputs of 12V, 5V, 3.3V, etc. What you want to do is see how stable your voltage stays while the card is being used. Minor swings in voltage are normal, around .06v -- my CPU swings from 1.375 to 1.41, no biggy. But if you run the card and see some major voltage swings, you might be sucking the life out of your PSU. On the recommendation about voltage to the graphics card: I've never seen a BIOS that specifically measures gfx slot voltage. Most of the voltages in the BIOS are references to RAM, CPU core, and PCI, not a read of the voltage going to a specific slot. A memory timing problem would lock the machine; you can always run Prime95 to test your memory ad nauseam. Now, you asked why it would work before, might not work now, and how that could be a faulty card. Which leads me to another question: how hot does your system get? What's your CPU temperature range, and what temperatures is the board reaching under the Nvidia control panel? The 7900-series cards have a power-throttling chip to protect the circuitry of the board in case temperatures get too high or the voltage is inadequate or too high.
In some 7900 boards there is a faulty BIOS that prevents the 'regulator' from resetting properly. You can do a google search on 7900GTX and Thermal Throttling to get an idea of that particular problem. Given all that you've mentioned so far, I'm leaning towards either not enough PSU power making the board throttle, or an overheating case doing the same. P-
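The "watch the rails under load" advice from these posts can be encoded as a quick check. This is a sketch, not a definitive rule: the ±5% window is the usual ATX tolerance, the 0.3V swing threshold is the figure treated as acceptable in an earlier post, and the function name is mine:

```python
def rail_ok(nominal_v, readings, tolerance=0.05, max_swing=0.3):
    # A rail is suspect if any reading leaves the +/-5% window around
    # nominal, or the spread under load exceeds max_swing volts.
    lo, hi = nominal_v * (1 - tolerance), nominal_v * (1 + tolerance)
    in_spec = all(lo <= r <= hi for r in readings)
    steady = (max(readings) - min(readings)) <= max_swing
    return in_spec and steady

# the 12V readings mentioned in these posts: swing of 0.28V, within spec
print(rail_ok(12.0, [11.98, 11.73, 12.01]))
```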