
Question

Posted

Hi guys, not sure if you remember me. I disappeared due to work and real life, but here I am again and I have a question for you. 

 

I had been running STEP on my computer with these current parts:

[CORSAIR HX Series HX750 750W Power Supply]

[GIGABYTE Super Overclock Series GeForce GTX 470 1280MB x 2 in SLI]

[Intel Core i7-930 2.8GHz]

[Antec Nine Hundred Black Steel ATX Mid Tower Computer Case with Upgraded USB 3.0]

[Western Digital WD Black 1TB 7200 RPM Internal Hard Drive]

[G.SKILL 6GB 3 x 2GB 240-Pin DDR3 SDRAM DDR3 1600 PC3 12800]

[ASUS P6T LGA 1366 Intel X58 ATX Intel Motherboard]

[ARCTIC Freezer 7 Pro Rev. 2, CPU Cooler 92mm PWM Fan]

120GB Intel SSD

 

And I wanted to upgrade a bit to try to run extreme STEP. My question is this: can I just buy the new GTX 770, or do I need to replace the processor, board, and RAM as well? My budget isn't unlimited, so I'd rather spend it all on the graphics card, but I was hoping for some input. I have built my PC before, but I am not great at figuring out what I really should go with. Any help is appreciated.
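
For a rough power budget, going by the published TDPs: the two GTX 470s are rated at about 215 W each (roughly 430 W of graphics load today), while a single GTX 770 is rated at 230 W and the i7-930 at 130 W, so the HX750 should have plenty of headroom either way.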

 

PS: Looks like you guys have put together a terrific setup here!

Posted

I was a fan of nVidia for GPUs and AMD for CPUs. A few years ago I switched from nVidia to an AMD card, and I was very disappointed, I must say. Performance and compatibility... painful. I can't remember when it was, a few years ago now, so it may not be relevant anymore. When that AMD card became a bit old, I switched back to nVidia and I was a happy bunny again.

 

There is this stereotype that nVidia is for gaming and AMD is for processing - and I can see where this came from.

Posted

What exactly is CUDA?

An API for using the GPU for rendering and computation instead of the CPU... basically. 

 

Just do a quick search for CUDA on Google and you will get a ton of hits that explain it in great detail. 
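
To make that concrete, here is a minimal sketch of the kind of thing CUDA lets you do, using the standard CUDA runtime API (a toy "saxpy" computation that runs on the GPU instead of looping on the CPU):

#include <cuda_runtime.h>
#include <cstdio>

// GPU kernel: each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Fill the input arrays on the CPU.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy them into GPU memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch 4096 blocks of 256 threads: one thread per element.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);           // expect 4.0
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

Compile it with nvcc and the card churns through a million elements in parallel; GPU-enabled distributed computing projects do essentially this at a much larger scale.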

Posted

What exactly is CUDA?

An API for using the GPU for rendering and computation instead of the CPU... basically. 

 

Just do a quick search for CUDA on Google and you will get a ton of hits that explain it in great detail. 

Will do. I'm very happy with my 7970 GHz Edition, but my next card might need to be Nvidia. I've also decided to stick with single GPUs and never go for SLI/Crossfire again, because of the pain-in-the-ass compatibility issues, microstutter, extra power and heat, etc. 
Posted

What exactly is CUDA?

A framework by NVIDIA for using the GPU for things other than graphics. AMD cards can do similar work too, but more distributed computing projects use CUDA than OpenCL on AMD cards.

 

OT: Anyone for a STEP team on BOINC?

Posted

I've got a couple of rigs, one using an Nvidia GTX 580, the other a Radeon HD 7970. People like to exaggerate problems with ATI cards. Granted, Nvidia Inspector is more user-friendly than Radeon Pro, and the same goes for the equivalent control panels, but the differences are often down to aesthetics. I take it a lot of users just don't know how to configure them optimally. Catalyst drivers have been more solid since 12.11 than GeForce drivers have been in the past few months.

 

I don't disagree that ENB is a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB performance will be negated by simply having a faster card.

 

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing: how did they implement 3GB of VRAM on a card with a 192-bit bus? It seems you don't mind having a card with limited bandwidth.

 

Getting that card is a bad idea. But I'm sure the peeps on this forum know better.

 

Edit: DoYouEvenModBro, I'd love to see that microstutter on your single-GPU card, bro.

Posted

I've got a couple of rigs, one using an Nvidia GTX 580, the other a Radeon HD 7970. People like to exaggerate problems with ATI cards. Granted, Nvidia Inspector is more user-friendly than Radeon Pro, and the same goes for the equivalent control panels, but the differences are often down to aesthetics. I take it a lot of users just don't know how to configure them optimally. Catalyst drivers have been more solid since 12.11 than GeForce drivers have been in the past few months.

 

I don't disagree that ENB is a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB performance will be negated by simply having a faster card.

 

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing: how did they implement 3GB of VRAM on a card with a 192-bit bus? It seems you don't mind having a card with limited bandwidth.

 

Getting that card is a bad idea. But I'm sure the peeps on this forum know better.

 

Edit: DoYouEvenModBro, I'd love to see that microstutter on your single-GPU card, bro.

I don't even use Radeon Pro; I use Catalyst Control Center. It seems to give me all the driver options I need, although sometimes CCC sucks at forcing AA/supersampling. I don't understand your last line. 
Posted

Correct me if I'm wrong, but to me it seems that in a comparison between the 660 and the Radeon 7870 (which is about the same price), in most cases the 660 matches or even beats the Radeon by a few frames despite having a smaller memory bus. It doesn't seem to me that the Radeon is worth it when you have the 660. However, the smaller memory bus does limit AF/AA, but I can compensate for that.

You're not even comparing the right cards. I was talking about the Radeon 7870 XT; I even linked a handful of reviews of that card for you in one of my previous posts, which you seem to have overlooked.

 

And find benchmarks that are newer than 6 months.

Posted

People like to exaggerate problems with ATI cards.

That is true! I do not feel we have gone into a fanboy contest here yet, and I hope we do not ever get there... it is such a waste of time. 

That said, there are also issues with Nvidia, as the latest beta version and ENB showed... a lot of people upgraded and suddenly everyone had the sun shining through every building! 

Driver issues happen for both companies, and AMD has gotten better over the years, but they still have a horrible reputation to get past. Granted, a large part of this is because some of the major AAA titles have been more optimized for Nvidia cards and hence produced better results in benchmarks. 

 

but the differences are often down to aesthetics

Aesthetics are part of it, no doubt. However, there are also large differences in what the cards support. I mention CUDA since it is the most obvious area where Nvidia is still far ahead of AMD. Again, mostly because they were out quicker, and hence most people naturally started to use it.

PhysX is the next largest difference, and games made for PhysX will of course have more effects at higher framerates than AMD can provide, since AMD cards cannot use the technology. 

I don't disagree that ENB is a bit more efficient on Nvidia cards, but then again, AMD has a lead at the mainstream and lower end, and the difference in ENB performance will be negated by simply having a faster card.

It is not just performance-wise that Nvidia is better for ENB; it is also largely in terms of stability, weird bugs, etc. There are again subtle differences at the driver level that can make something work on Nvidia and not on AMD, but of course this also goes the other way around! Boris is, after all, just one guy who does this in his spare time, and if he develops on an Nvidia card, then some AMD-related issues obviously slip through. 

Whilst VRAM might come into consideration when choosing similarly specced cards, I think it's ridiculous to think that your hypothetical GTX 660 would have enough power to utilise 3GB of video memory efficiently. It's a waste of money from that point of view. Another thing: how did they implement 3GB of VRAM on a card with a 192-bit bus? It seems you don't mind having a card with limited bandwidth.

Not sure what you mean by this, to be honest. The reason they put the extra GB on the card is just a marketing stunt, I imagine. Higher numbers always look better, after all. 

As for it not having enough power to utilise it... not sure what you mean here. I have not seen a computer based on any of the more modern cards where the graphics card's bus speed is the bottleneck... at least not in games. The CPU, RAM, HDD, etc. will all cause a bottleneck much earlier. 

So I guess the reverse question is also relevant: why do you need a card with such high bandwidth when it is almost never the cause of bottlenecks? 

Getting that card is a bad idea. But I'm sure the peeps on this forum know better.

Again, my point is only to get the card that suits your needs, and not a general "Nvidia is always better than AMD", since that is just not true. 

If you need CUDA, PhysX, etc., then there is not even a choice in the matter, sadly. And in terms of cost/performance ratio, the GTX 660 3GB is the best one Nvidia has to offer. 

Sorry if you felt that I advocated that people just get Nvidia because they are so much better etc! That was not my intention! 

Posted

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it. Likewise for PhysX; there are better alternatives in terms of technology, but PhysX has more money behind it at the moment. Yet its usage is limited to a handful of titles when it comes to actual visual improvements. The part about more effects at higher framerates is not necessarily true. A lot of PhysX-enabled games cripple the performance, or at least used to in the past.

 

I can't really comment on ENB all that much as I haven't used it on AMD cards in a while, but from my recent testing in Skyrim, my Radeon 7970 does surprisingly well with it. Boris tends to ***** about both vendors, which is understandable.

 

192-bit bus width is standard for cards with 2 GB of VRAM, not 3. Its usage comes down to the design, which is why the 3GB of memory will have limited (as in slower) memory bandwidth available, resulting in not-so-great performance if you really want to load it up. I agree with you on the other point, that it's a marketing gimmick anyway, since a card like the GTX 660 would be throttled by its raw power way before VRAM becomes a factor.
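
For reference, peak memory bandwidth falls straight out of bus width and effective memory clock. Assuming the commonly quoted reference clocks (board partners vary):

bandwidth = bus width (bits) / 8 x effective memory clock
GTX 660: 192 / 8 x 6.008 GT/s ≈ 144.2 GB/s
Radeon HD 7870: 256 / 8 x 4.8 GT/s ≈ 153.6 GB/s

So the extra gigabyte on the 3GB model sits behind the same ~144 GB/s pipe either way.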

 

I disagree that anybody needs PhysX. It's quite useless and I even had a dedicated PhysX card for a while. It's a nice addition to have, but I wouldn't pay for it.

 

At least you can argue your points in an appropriate manner; there's nothing wrong with disagreeing with my point of view. Bear in mind that I've been using all sorts of hardware for the past 15 years or so, and particularly have not cared about loyalty to either Nvidia or ATI in the past few years, since the latter often offers superior value for money despite the bad rep it gets from uninformed consumers.

Posted

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it. Likewise for PhysX; there are better alternatives in terms of technology, but PhysX has more money behind it at the moment. Yet its usage is limited to a handful of titles when it comes to actual visual improvements. The part about more effects at higher framerates is not necessarily true. A lot of PhysX-enabled games cripple the performance, or at least used to in the past.

 

I can't really comment on ENB all that much as I haven't used it on AMD cards in a while, but from my recent testing in Skyrim, my Radeon 7970 does surprisingly well with it. Boris tends to ***** about both vendors, which is understandable.

 

192-bit bus width is standard for cards with 2 GB of VRAM, not 3. Its usage comes down to the design, which is why the 3GB of memory will have limited (as in slower) memory bandwidth available, resulting in not-so-great performance if you really want to load it up. I agree with you on the other point, that it's a marketing gimmick anyway, since a card like the GTX 660 would be throttled by its raw power way before VRAM becomes a factor.

 

I disagree that anybody needs PhysX. It's quite useless and I even had a dedicated PhysX card for a while. It's a nice addition to have, but I wouldn't pay for it.

 

At least you can argue your points in an appropriate manner; there's nothing wrong with disagreeing with my point of view. Bear in mind that I've been using all sorts of hardware for the past 15 years or so, and particularly have not cared about loyalty to either Nvidia or ATI in the past few years, since the latter often offers superior value for money despite the bad rep it gets from uninformed consumers.

I think PhysX is definitely worth it SOMETIMES. For example, Metro: Last Light has superior lighting and particle physics. With PhysX enabled, the dust swirls dynamically around character models; without it, the dust just stays static and floats in place. Same with bullet sparks, etc. If you turn it on with an ATI card (even a 7970), fps drops to like 10 whenever there is a demand for PhysX, so basically whenever bullets are flying or you get to an area with a lot of smoke or vapor. 7970s seem to work fine for ENB, as you said. I barely get an fps drop with Skyrealism on. 
Posted

I haven't had a chance to play Metro: Last Light, but I've heard good things about the PhysX effects in that game. Then again, you won't be able to enjoy them with all the bells and whistles on a GTX 660 at 1920x1080, so my argument in that context stands.

 

Like I said before, I do appreciate PhysX effects in games like Batman, but they could be done using an open technology if there was enough demand for it. As it stands, Nvidia is restricting PhysX for the sole purpose of having it as their USP. Developers get money to use PhysX in their games.

Posted

I haven't had a chance to play Metro: Last Light, but I've heard good things about the PhysX effects in that game. Then again, you won't be able to enjoy them with all the bells and whistles on a GTX 660 at 1920x1080, so my argument in that context stands.

 

Like I said before, I do appreciate PhysX effects in games like Batman, but they could be done using an open technology if there was enough demand for it. As it stands, Nvidia is restricting PhysX for the sole purpose of having it as their USP. Developers get money to use PhysX in their games.

Exactly, and that's honestly ******** and I hope it changes. I mean, right when you start up Metro, you see at least 3 god damn Nvidia logos and a giant banner that says NVIDIA: THE WAY IT WAS MEANT TO BE PLAYED. I kind of just hate Nvidia for putting up an ego similar to that of Microsoft.
Posted

Right... I play at 1440x900 anyway, so for me, I can run just about all games with maxed-out settings easily on the GTX 660. I'm really not convinced that I should get the 7870 when the 660, for me, is the best mix of cost and performance. Don't want to break the bank...

 

Also, being able to use the GPU for more distributed computing projects AND for PhysX, which can come in handy in quite a few games, seals the deal.

Posted

CUDA is arguably worse than OpenCL, which can run on all platforms. Its usage in certain applications is worth noting, but most people will have no use for it.

Sounds like we are heading into an argument of open source vs. proprietary software, which is a bit off topic. :)  

But regardless of that, it is a feature of Nvidia cards, so it is worth keeping in mind when making a decision! 

I have not used it in years, so I cannot speak to its performance and documentation today. I have only briefly looked up the current details, since I just started on 3D modelling etc. as my new hobby. 

 

Likewise for PhysX; there are better alternatives in terms of technology, but PhysX has more money behind it at the moment.

I have only vaguely read about these better alternatives. The last one I saw promised the end of polygon rendering and could produce very nice images of static objects. However, they never released their code to the public, and only showed the results of their "revolutionary" methods on YouTube. I guess there are theoretically a few ideas that have made it into prototypes, but they are still a long way from convincing the industry to change standards. 

 

I partly disagree with your point about nobody needing PhysX.

Imo, PhysX is only now maturing, since it has always suffered under DX9 for the same reason we suffer... not enough memory. However, all the other technologies have had this issue as well, which is why it has been too costly an affair to try to wrest the market from PhysX for marginal performance gains. I think we are going to see more of them pop up in the coming years as physics simulations become more demanding and required in games. 

Developers get money to use PhysX in their games.

Actually, it is the game engine developers who decide it should be in their engines! Game makers are then pretty much bound to use it unless they want to implement some sort of merge between the two pieces of software. It is true that Nvidia has deals here, and I am fairly sure they are mutually beneficial for both companies, with royalties going both ways. 

 

Boris tends to ***** about both vendors, which is understandable.

Hehe yeah true that! Thanks for that laugh! ;) 

 

As for the memory bandwidth, you are of course right that the card will get outperformed by cards with a wider bus at tasks where this is the bottleneck. However, this bottleneck is really only an issue if you are running seriously demanding graphics operations at high resolutions, which most games today can't do anyway, since they are mainly made for last-generation consoles. Most of the time, other bottlenecks will hit you hard first.  

 

Also, thank you for keeping a civil tone! It is refreshing when dealing with this topic! Sorry if I sound a bit fanboyish in some posts! I try my best to avoid it! We all fall into that trap every now and again, I guess! 
