Spock Posted June 16, 2015 (edited)

https://videocardz.com/56561/amd-unveils-radeon-r9-fury-series

The price tag is much lower than expected. And it apparently caught up with Maxwell in rasterization power efficiency. Hawaii was already roughly on par in compute performance/watt. Compute is very architecture-specific though. If you want to see it in benches, check the Furmark power draw and the theoretical TFlops for an estimate (there's a quick sketch of that estimate at the end of this post).

Side note: https://top500.org/blog/the-inside-story-of-lattice-csc-1-supercomputer-on-the-green500/

NVidia might have a problem until Fermi gets released. Kepler just sucks because it lacks critical features, and the 970 has the vram issue. So team green really only has 3 viable cards, and those are in the higher price range atm.

More about the architectures:

Why you don't want to buy Kepler: Maxwell and GCN 1.1+ internally keep as many threads in flight as they have ROPs, so they can work on another task while they wait out latencies. Besides, Kepler has too few shader units to begin with. As more titles like The Witcher 3 come out that don't have the time to program around this weakness and still do decent post-processing, the problem will become more and more apparent. Not to mention any demanding tasks besides gaming. Sorry, but that architecture was borked from the beginning :/

GCN 1.2's main feature is lossless color compression btw, which should save even more bandwidth. Afaik 3D graphics performance is currently not really limited by memory bandwidth though, so the gain for gamers might be negligible (unless you want high resolutions).

Pascal: This will make me swing back to team green. Not that the GTX 980 isn't a great card, but it only really excels at power efficiency while rasterizing and costs more money than I'm willing to spend. For many tasks you really only need half precision (16 bit), much of the very stunning global illumination stuff for example. If that can be handled at almost double the performance, AMD will have to react really quickly.

[edit] Next gen NVidia is Pascal, not Fermi. Sorry for the mix-up.

Edited July 2, 2015 by Spock
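Back-of-envelope, if you want to do the perf/watt estimate yourself: peak FP32 throughput is shader count times clock times 2 FLOPs per cycle (one FMA), divided by the Furmark board power. The shader count, clock and Furmark draw below are placeholder figures for a Hawaii-class card, not measured data, so treat this as a sketch only:

```python
# Rough theoretical-throughput and perf/watt estimate for a GPU.
# The numbers below are illustrative placeholders -- look up the real
# figures for whichever card you want to compare.

def theoretical_tflops(shader_units: int, clock_mhz: float) -> float:
    """FP32 peak: each shader unit does one FMA (2 FLOPs) per cycle."""
    return shader_units * clock_mhz * 1e6 * 2 / 1e12

shaders = 2816        # e.g. a Hawaii/290X-class chip
clock = 1000          # core clock in MHz
furmark_watts = 290   # worst-case board power under Furmark (placeholder)

tflops = theoretical_tflops(shaders, clock)
print(f"~{tflops:.1f} TFLOPS peak, ~{tflops * 1000 / furmark_watts:.1f} GFLOPS/W")
```

It's crude (real workloads never hit the theoretical peak), but it's enough to compare architectures in the same ballpark.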
paradoxbound Posted June 18, 2015

I am looking forward to the Fury X. I am wondering if HBM will have any effect on performance with all the HD texture loading a heavily modded Skyrim needs to do. I was looking at replacing my pair of crossfired 290X Ubers with a single GTX 980 Ti, but I think the Fury will be the better card.
Spock Posted June 18, 2015 Author (edited)

I don't think HBM will have a huge effect on gaming performance at 1080p. Afaik the latency of HBM is actually not lower than GDDR5, and I doubt the extra bandwidth will be of much use there. For compute this is a different ballpark: afaik there are compute tasks which are severely bottlenecked by memory bandwidth. The better power efficiency and the advantages for the PCB layout will be huge for clusters (rough bandwidth numbers at the end of this post).

Also: 8.6 (theoretical) TFlops! That shader performance will probably allow you to go bonkers with ENB settings. This will probably be the card that can handle Skyrim at 1440p/1600p with demanding ENB settings.

Edited June 18, 2015 by Spock
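For a sense of scale on the bandwidth jump: peak bandwidth is just bus width times effective data rate per pin. The figures below are the commonly quoted specs for the 290X and Fury X, so take them as approximate:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * effective rate (Gbps/pin) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Commonly quoted specs, for scale only
gddr5_290x = bandwidth_gbs(512, 5.0)    # 512-bit GDDR5 @ 5 Gbps   -> ~320 GB/s
hbm_fury_x = bandwidth_gbs(4096, 1.0)   # 4096-bit HBM1 @ 1 Gbps   -> ~512 GB/s
print(gddr5_290x, hbm_fury_x)
```

So roughly 60% more bandwidth, at a lower memory clock and with much less board space.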
Spock Posted June 19, 2015 Author

Sorry for the double post, but official benchmarks have leaked: https://videocardz.com/56711/amd-radeon-r9-fury-x-official-benchmarks-leaked

They should be taken with a grain of salt though. Fury will probably benefit much more from its memory bandwidth at 4K than at 1080p (rough pixel-count comparison below). Also, AMD will have picked the most unfavorable settings for NVidia.
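Very rough intuition for the 4K-vs-1080p point, ignoring everything except raw pixel count (real bandwidth demand depends on the whole frame pipeline, so this is only a sketch):

```python
# 4K pushes roughly 4x the pixels of 1080p, so per-frame framebuffer
# traffic scales up by about the same factor -- which is where the
# extra HBM bandwidth should actually get used.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k    = 3840 * 2160   # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0
```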