Well the R9 380 is about 10-15% faster... so it's up to you whether that's worth it for you. Both the GTX 960 and R9 380 are available in 2GB and 4GB flavours, so you could also get a 4GB GTX 960 or a 2GB R9 380... depends on what you're willing to spend and your priorities.
Performance-wise, like I said, the R9 380 is about 10-15% faster than the GTX 960. In terms of 2GB vs 4GB, in most games it won't make a difference, but there are a few where it will have an impact... for example, in GTA V you ideally need more than 2GB of VRAM if you wish to play with High texture settings, which looks better than Normal by a pretty good margin.
This is my personal preference on which GPUs I would rather have in order out of those in this price range:
R9 280X > R9 380 4GB > GTX 960 4GB > R9 280 > R9 380 2GB > GTX 960 2GB
I'd suggest just picking the best one from that list that fits your budget.
Yep. Its performance is very very close to the GTX 960... they trade blows depending on the game. It just has a bit more VRAM which can come in handy if you want to play with higher quality textures
They would perform about the same. The 4690K would be slightly more "future proof" since you could upgrade to a more powerful video card, especially if you overclock it down the road.
However the R9 380 4GB is a bit better than the GTX 960 2GB, especially since the extra VRAM will allow you to run with higher texture quality in some modern games like GTA V, Shadow of Mordor etc.
You might want to look into getting an R9 280 3GB, it performs about the same as the GTX 960 but has 3GB of VRAM. Costs just slightly more.
I wasn't arguing, just making it clear which models don't have it =)
Resolution doesn't have any impact on CPU performance. By the time the GPU starts building the image out of pixels, the CPU has already done its job.
A bottleneck is more likely at lower resolutions, though, because framerates will be higher, and a higher framerate means more CPU load (since the CPU processes a certain amount of work for each frame). But then you can just crank the settings up to get more out of the GPU.
For example let's say that in a certain game the i5-4460 were to become the bottleneck (that is, it reaches its maximum) at 60 FPS. It would also reach its maximum at 60 FPS at 720P or at 4K.
If you're playing at 720P, you could very well get higher framerates than at 1080P (with the same video card), since the GPU doesn't have to work as hard on each frame. But at those higher framerates you'd start hitting the CPU's limits... like I said, because there's less work for the GPU to do on each frame, you can just crank the settings up if the GPU has extra horsepower available :)
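The logic above can be sketched as a toy frame-budget model: each frame needs some CPU work and some GPU work, and whichever takes longer sets the framerate. The per-frame millisecond numbers below are made-up illustrative values, not benchmarks of any real CPU or GPU.

```python
# Toy bottleneck model: framerate is limited by the slower of the two stages.
def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """1000 ms per second divided by the longest per-frame stage."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

cpu_ms = 16.7  # hypothetical CPU that tops out near 60 FPS at ANY resolution

print(fps(cpu_ms, gpu_ms_per_frame=8.0))   # 720P: GPU is quick, CPU caps it near 60
print(fps(cpu_ms, gpu_ms_per_frame=25.0))  # 4K: GPU is now the limit, 40 FPS
```

Same CPU cost per frame in both cases; only the GPU cost changes with resolution, which is why the CPU's FPS ceiling stays put.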
If you're playing at 1080P, I'd recommend Radeon R9 380 4GB or GeForce GTX 960 4GB. You can get away with 2GB versions of these cards, just means you won't be able to play some modern/future titles with higher texture quality. Some good last gen options are the R9 280X and GTX 770, however these consume a bit more power than the 380 and 960.
If you're playing at 1440P I'd suggest R9 290/290X 4GB or GTX 970.
Depends on the game and the specific CPUs in question. The Xeon 1220V3, 1225V3, 1226V3 do not have hyper-threading, so are virtually identical to a Core i5 4xxx at the same clock speed. For example a Xeon 1220V3 performs identically to an i5-4440.
Meanwhile the Xeon E3s that do have hyper-threading (being the 1230V3 and up) will perform the same as a Core i7 4xxx at the same clock speed. For example the Xeon E3 1241V3 will perform the same as an i7-4770.
Now comparing the 4690K and 1230V3, it will depend. In most games there will be almost no discernible difference between the two as performance will be nearly identical. If you were thinking of overclocking, then you'd probably get better performance in games out of the 4690K. If you were gonna run either CPU at stock, then they would probably trade blows with the Xeon 1230V3 pulling ahead a bit in modern (and future games).
Although if you don't want to overclock, there's no point getting the 4690K anyway; you'd be better off with an i5-4460 or 4590 and saving $30+!
Nope. Many of the Xeon E3s do not have hyper-threading. Xeon E3 1220, 1225, and 1226 do not have hyper-threading.
Pretty small downgrade. It depends on the game. There are some games where there'd be a small difference, though it would be around 10-15% at the most. In most games you'll see less than 5% margin between the two CPUs.
I currently have an i5-4690K (at stock speeds) and it can run every single one of my games at well over 60 FPS 99% of the time (except MMORPGs which have massive framerate issues with every CPU ever)... so you should be able to expect similar out of the i5-4460.
If you want some specific numbers, this is what I get in my games usually at a mix of medium-high settings with a GTX 760 (I tend to have CPU intensive options turned up since it's a high end CPU, while anti-aliasing and ultra high quality textures I turn down because the GPU is mid range)
Battlefield 4 -- ~80 FPS minimum, 100 FPS average
Grand Theft Auto Online -- ~50 FPS minimum, 85 FPS average (99% above 60 FPS, very occasionally dips into the 50s when there's tons of explosions going on at once)
SMITE -- ~100 FPS minimum, 120 FPS average
AMD appears cheaper but the specs on paper don't tell the whole story, not in the slightest. There are a few very important things to know.
1) In AMD's FX series of CPUs and FM2/FM2+ socket CPUs, they don't really have as many cores as advertised
2) Clock speed doesn't tell you much about performance
3) Specs on paper don't tell you anything unless you're comparing CPUs of the same generation
Long story short, AMD's current CPUs have much slower cores than Intel's current CPUs. Where AMD currently excels is in parallel workloads on a budget... for things like video editing/rendering, 3D modelling (in some applications, at least), virtual machines, and some types of scientific calculations, AMD's FX lineup offers phenomenal value.
However, in more mainstream usage scenarios... meaning gaming, web design, photo editing... Intel's CPUs are better by a pretty good margin. Games aren't parallel workloads, but rather "bursty" workloads (inconsistent loads that don't scale across threads and have to be done in specific orders by specific threads/cores).
I'll expand on each point above if you're interested, and at the bottom of my post is my hierarchy for best to worst CPUs.
1) You may have heard of Intel's hyper-threading. Basically this is a technology Intel uses that allows one execution unit (the part of the CPU that completes an instruction) to complete instructions from two threads... so the software can use one core as if it were two cores, allowing more efficient scheduling of tasks. AMD doesn't (yet) use this technology, and instead their FX and FM2(+) CPUs use something called cluster multithreading. Basically they add an extra execution unit to a core, so some types of instructions can be completed simultaneously by a single core. However when it comes to marketing, AMD has advertised the execution units as cores.
For example, if you were to compare an FX-8350 to an i7-4770 in terms of internal components, they both have four floating point units, four pools of L2 cache, and four decoders... the 8350 just has eight integer execution units instead of four. It looks more like a quad-core with a little more beef. In principle this approach scales better than hyper-threading... the problem right now is that AMD's individual cores are very weak.
But yeah... basically AMD's FX/FM2(+) CPUs have half as many cores as advertised but then have something similar to hyper-threading. It's a really iffy topic because there's no firm definition of a "core".
2) Clock speed in a CPU only tells you how many cycles it completes per second... it doesn't tell you how much gets done in each cycle. For example, if you compared an FX-8350 and an i5-4690K both at 4GHz, the i5-4690K's cores would be completing tasks roughly 50% faster. This is because of something called instructions per cycle (IPC). Intel's current CPUs complete many more instructions per cycle than AMD's.
So when you see stuff like an AMD 4GHz CPU for the price of an Intel 3.4GHz CPU... the 3.4GHz Intel CPU is still faster.
3) Because different CPU architectures have different strengths and weaknesses, you generally can't get a good idea of performance differences between them by looking at clock speed, number of cores, etc.
For example, comparing an FX-8350 to an i5-4690K, these are not good metrics for comparison... because even though the FX-8350 is listed as having twice the "cores" and a higher clock speed, they have very similar total processing power (FX-8350 only slightly edging it out, whereas based on specs you'd expect double the performance) and the i5-4690K's cores complete things faster in spite of the lower clock speed.
Whereas it's perfectly fine to compare an i3-4150 to an i7-4790 because they both use the same Haswell architecture... so performance scales pretty linearly with clock speed and core count.
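Within one architecture, a crude estimate of well-threaded throughput really is just cores times clock, since IPC is the same chip to chip. Core counts and clocks below are approximate, for illustration only.

```python
# Same-architecture throughput estimate: cores * clock (IPC cancels out).
def same_arch_estimate(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz

i3 = same_arch_estimate(2, 3.5)  # dual-core Haswell i3, ~3.5GHz
i7 = same_arch_estimate(4, 3.6)  # quad-core Haswell i7, ~3.6GHz

print(i7 / i3)  # roughly 2x the throughput in workloads that use all cores
```

This only holds across chips of the same generation; across architectures, the IPC term no longer cancels, which is the whole point of 2) and 3) above.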
Not all first-gen i5s were dual-core. The i5-600s were dual-core and the i5-700s were quad-core.
Don't expect AMD to be consumer oriented if they become profitable again. The company doesn't care about you any more than Intel does; they just can't afford to have a wide range of SKUs with certain feature sets limited to certain chips.
Depends on what the OP is doing. If we're talking about a gaming PC, then absolutely Intel is the way to go. If they're talking about a productivity machine for running virtual machines, encoding video, 3D modelling, etc... the FX CPUs cannot be beat for their price.
Well if he does a lot of work with VMs, video encoding etc. the FX-9590 will do pretty well, but yeah overall an i7 would have been a lot better especially for gaming (unless he only plays MOBAs or source games) and I don't understand the mentality behind saving $50 on a crucial part of the build on a >$3000 rig.
No... I don't understand why people on the web are so loyal to AMD, but this simply isn't true. The FX-9590 only matches a Core i3 in most games, and doesn't come close to an i5-4690K or i7-4790K in the overwhelming majority of titles when playing at high framerates. The FX-9590 will be a bottleneck long before similarly priced Intel CPUs.
Video encoding and synthetic benchmarks are where the FX-9590 performs at its best, since these tasks are parallel. Gaming is not parallel... they can be written to have certain tasks run on separate threads, but they are not a gracefully scaling workload, and Intel is simply much better for gaming.
If you're playing at 45 fps with something like a GTX 750 at 1080P or an R9 290 at 4K, then yeah sure there's no difference between an FX CPU and an Intel CPU... but if you're looking for rock solid 60 fps in latest triple A titles or want to push 120/144 fps without frame dips, the Intel will be SUBSTANTIALLY better than the AMD CPU.
I don't know why everyone finds this such a hard pill to swallow...
Thanks for the advice. I was gonna get an EVGA SuperNova NEX but it seems they have a notorious noisy fan problem... I got the RM650 on sale from NCIX, and the SuperNova G2 was a lot more expensive at the time and I wasn't sure whether it'd have the same issues as the NEX. I'm hoping the RM 650 won't give me any trouble, right now my system demands probably around 250W under heavy load, so it's not being pushed too hard.
And are you sure about the thermal paste? Perhaps AS5 isn't the best, I'm not sure, but the pre-applied stuff was literally covering the whole contact area and very thickly applied, I think it would have done very poorly.
And 3.8GHz is just what it runs at by default across all cores. My understanding is that the way turbo works is that the CPU can run 3.5GHz across all cores, or 3.9GHz on a single core while the others downclock depending on the workload. Mine's doing 3.8GHz on all 4 cores at stock settings.
And I'll look more into changing the paste on the GPU... although I might just get an R9 290 or GTX 970.
It runs at 3.8GHz across all cores by default. Stable at 4.5 with 1.225V (so perhaps 4.7 @ 1.3 is doable) but I'm just running at 3.8 because it's more than fast enough for what I do with it, so it'd really just be extra heat.
Of course 2x GTX 770 will be better than one. Although if you're willing to spend that much, I would opt for a single GTX 780 Ti over dual GTX 770s. The raw performance won't be quite as good as 2x GTX 770s, but you don't have to worry about certain games that don't run SLI well, if at all. Additionally you'll have lower power consumption, less heat produced in the case, and no microstuttering issues either, though those aren't that bad in SLI/CFX these days.
Also, as someone else pointed out, it could be worth getting 4GB versions for SLI... IF you plan on going for high resolutions. For 1080P, a 2GB card is plenty... but you would likely benefit from 4GB cards if you wanted to play at 2.5K or 4K.
What about the case's PCI expansion slots; are there no issues with accessing the GPU's outputs with it rotated like this? Is there a large gap allowing dust to enter the case?
If there's a build video on this I'd be greatly interested in watching it.
Well as for a specific B75 motherboard, whatever is well rated and in a good price range for you. Basically the motherboards are like this:
B75 - No overclocking, smaller connectivity, lower memory speeds
H77 - No overclocking, more connectivity (and more sata 6 etc), better memory speeds
Z77 - Same features as H77 but also with overclocking (if CPU supports OC)
The 280X plays BF3 at ultra at 1080P like butter. I dunno about BF4 - it's hard to say since tons of people are experiencing crashes because it was rushed to release. The 270X should still play it well, but I'd expect medium-high for BF4 and high-ultra for BF3, whereas the 280X would be ultra for BF3 and probably ultra (or at least close to it) for BF4 as well.
It should be fine, the 3470 is better than an FX 6300 which wouldn't bottleneck it either. But there's no point getting a Z77 board with that CPU. The Z77 board is designed for overclocking, but the 3470 cannot be overclocked (only CPUs with a K at the end can be). Might as well drop down to a B75 motherboard if you're getting the 3470. If you did want to overclock you'll need the 3570K.
And I have no idea what the requirements for Watch Dogs are like, but it should play Battlefield 4 fine on high settings at 1080P at 60 fps. At 720P a $150 card could play it at ultra, since 720P pushes less than half the pixels of 1080P... though 1080P on low looks better than 720P on ultra :P
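Quick sanity check on that "less than half" claim, just multiplying out the standard resolutions:

```python
# Pixel counts for the two standard resolutions.
def pixels(width: int, height: int) -> int:
    return width * height

p1080 = pixels(1920, 1080)  # 2,073,600 pixels
p720  = pixels(1280, 720)   #   921,600 pixels

print(p720 / p1080)  # ~0.444, so 720P is a bit under half the pixels of 1080P
```

Put the other way around, 1080P is about 2.25x the pixels of 720P, so the GPU has roughly that much more shading work per frame at the same settings.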