
Fury (X)


Spock

Recommended Posts

https://videocardz.com/56561/amd-unveils-radeon-r9-fury-series

 

The price tag is much lower than expected. And it apparently caught up with Maxwell in rasterization power efficiency.

 

 

 

Hawaii was already approximately on par in compute performance per watt. Compute is very architecture-specific, though. If you want to see it in benchmarks, check FurMark power draw and theoretical TFLOPS for an estimate.

Side note: https://top500.org/blog/the-inside-story-of-lattice-csc-1-supercomputer-on-the-green500/
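For a quick back-of-the-envelope version of that estimate, here's a minimal Python sketch (the 2-FLOPs-per-shader-per-clock rule is the standard fused-multiply-add assumption; the example numbers are the announced Fury X specs):

# Theoretical FP32 throughput: 2 FLOPs (one fused multiply-add)
# per shader per clock. Example numbers: announced Fury X specs.
shaders = 4096        # Fiji stream processors
clock_ghz = 1.05      # core clock in GHz
board_power_w = 275   # typical board power

tflops = 2 * shaders * clock_ghz / 1000.0
print("theoretical peak: %.1f TFLOPS" % tflops)   # ~8.6
print("rough perf/watt:  %.1f GFLOPS/W" % (tflops * 1000 / board_power_w))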

 

 

 

NVidia might have a problem until Fermi gets released. Kepler just sucks because it lacks critical features, and the 970 has the VRAM issue. So team green really has only three viable cards, and those are in the higher price range atm.

 

More about the architectures:

 

 

Why you don't want to buy Kepler:

Maxwell and GCN 1.1+ internally have as many threads as ROPs, so they can work on another task when they have to wait out latencies. Besides, Kepler has too few shader units to begin with. As more titles like The Witcher 3 come out whose developers don't have the time to program around this weakness and still want decent post-processing, the problem will become more and more apparent. Not to mention any demanding tasks besides gaming. Sorry, but that architecture was borked from the beginning :/
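To illustrate the latency-hiding point, here's a toy Python model (the numbers are invented for illustration, not a simulation of either architecture): the more independent threads the scheduler can swap in during a memory stall, the less the ALUs sit idle.

# Toy model: a thread computes for `work` cycles, then stalls
# `latency` cycles on memory. With enough resident threads the
# scheduler always finds ready work during stalls.
def alu_utilization(resident_threads, work=4, latency=40):
    return min(1.0, resident_threads * work / float(work + latency))

for n in (1, 4, 8, 16):
    print("%2d resident threads -> %3.0f%% ALU busy" % (n, 100 * alu_utilization(n)))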

 

GCN 1.2's main feature is lossless color compression btw, which should save even more bandwidth. Afaik 3D graphics performance is currently not really limited by memory bandwidth, though, so the gain for gamers might be negligible (unless you want high resolutions).
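For a sense of scale, a sketch of raw render-target traffic (the `passes` factor is invented; real frames also move textures, geometry, and overdraw, so actual traffic is far higher):

# Rough render-target traffic per second at 60 fps -- just to show
# why higher resolutions lean harder on memory bandwidth.
def fb_gbps(w, h, fps=60, bytes_per_px=4, passes=4):
    return w * h * bytes_per_px * passes * fps / 1e9

for name, w, h in (("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)):
    print("%s: ~%.1f GB/s" % (name, fb_gbps(w, h)))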

 

Pascal:

This will make me swing back to team green. Not that the GTX 980 isn't a great card, but it only really excels at power efficiency in rasterization and costs more money than I'm willing to spend.

For many tasks you really only need half precision (16-bit), much of the very stunning global illumination stuff for example. If that can be handled at almost double performance, AMD will have to react really quickly.
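The memory side of that is easy to demonstrate (a NumPy sketch; the near-double ALU throughput itself needs hardware with double-rate FP16, which is what Pascal is supposed to bring):

import numpy as np

# Same data in single vs. half precision: half the bytes, so half
# the bandwidth and cache footprint, independent of ALU speed-ups.
n = 1000000
fp32 = np.random.rand(n).astype(np.float32)
fp16 = fp32.astype(np.float16)

print("fp32: %d bytes" % fp32.nbytes)   # 4000000
print("fp16: %d bytes" % fp16.nbytes)   # 2000000
# The cost: ~3 significant decimal digits instead of ~7.
print("max abs rounding error: %g" % np.abs(fp32 - fp16.astype(np.float32)).max())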

 

 

 

[edit] Next-gen NVidia is Pascal, not Fermi. Sorry for the mix-up.

Edited by Spock

I am looking forward to the Fury X. I am wondering if HBM will have any effect on performance with all the HD texture loading a heavily modded Skyrim needs to do. I was looking at replacing my pair of crossfired 290X Ubers with a single GTX 980 Ti, but I think the Fury will be the better card.


I don't think HBM will have a huge effect on gaming performance at 1080p. Afaik the latency of HBM is actually not lower than GDDR5's, so I doubt the extra bandwidth will be of much use.

 

For compute it's a different ballgame. Afaik there are compute tasks that are severely bottlenecked by memory bandwidth. The better power efficiency and the advantages for PCB layout will be huge for clusters. Also: 8.6 (theoretical) TFLOPS!
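A quick roofline-style check of that bottleneck (a sketch; 512 GB/s is the announced Fury X HBM bandwidth, 8.6 TFLOPS the theoretical peak from above):

# A kernel is bandwidth-bound when its arithmetic intensity
# (FLOPs per byte moved) is below peak_flops / peak_bandwidth.
peak_gflops = 8600.0   # theoretical FP32 peak (Fury X)
peak_gbs = 512.0       # announced HBM bandwidth (Fury X)

balance = peak_gflops / peak_gbs
print("need > %.1f FLOPs/byte to be compute-bound" % balance)   # ~16.8

# STREAM-like triad a[i] = b[i] + s * c[i] on float32:
# 2 FLOPs per 12 bytes moved -> hopelessly bandwidth-bound.
print("triad intensity: %.2f FLOPs/byte" % (2 / 12.0))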

 

That shader performance will probably let you go bonkers with ENB settings. This might well be the card that can handle Skyrim at 1440p/1600p with demanding ENB settings.

Edited by Spock