What’s your biggest PC bottleneck?

Modern PCs are incredibly powerful. They have multi-core CPUs, massively parallel graphics processors, gobs of memory, and ever-larger solid-state drives. Some of them are still bottlenecked, though, whether by a lack of horsepower, sufficient memory, speedy storage, or even the battery life to keep a mobile machine running away from the mains.

Those limiting factors depend on not only the system configuration, but also what you're trying to do—and where you're trying to do it. And that got us thinking: what's the biggest PC bottleneck in your day-to-day life?

We've listed a handful of options in the poll below, and you can add your own in the comments. Now, go vote!

Comments closed
    • VincentHanna
    • 5 years ago

    It’s that darn core 1 on my CPU. It’s always at 100% while the other 11 cores hum along happily at less than 20%.

    • Xenolith
    • 5 years ago

    My internet connection. I live in Alaska, and the ping times are pretty terrible.

    • Goofus Maximus
    • 5 years ago

    My own PC bottleneck is… my wallet. Simple as that.

    And I should have appended my reply to the “money” reply… Oops.

    • kureshii
    • 5 years ago

    As a small form factor enthusiast running a Thin-ITX build, definitely cooling. It’s a conscious choice, but nonetheless still a bottleneck. Interesting just how much you can fit into a passively cooled system these days though.

      • fredsnotdead
      • 5 years ago

      Passively cooled? What’s in it?

    • GasBandit
    • 5 years ago

    The fact that games are still being developed and launched that don’t know how to utilize multicore processors is my biggest bottleneck.

    • kamikaziechameleon
    • 5 years ago

    I’m super surprised that GPU is not reflected in the comments more. Look at the poll results.

    To put it simply, the #1 challenge when shopping for mobile machines is finding one that combines form factor and GPU without costing over two grand. The new Razer laptop:

    [url<]http://www.razerzone.com/gaming-systems/razer-blade[/url<]

    That machine is one of the only GPU-adequate machines, and it’s ridiculously expensive. Sure, a Surface Pro 3 starts at $800, but the integrated GPU in that model is a bigger joke than the neutered processor.

    The #1 thing limiting mobile computing is the availability of the GPU. The most expensive desktop component is the GPU: it gets replaced every two years, and it’s outdated as soon as you plug it in. This is amazing, because it has felt as though GPU tech has been growing by leaps and bounds since its inception.

    • deinabog
    • 5 years ago

    I’d say the biggest bottlenecks on both my PCs and laptop would be the mechanical hard drives. I haven’t moved up to SSDs yet for booting although the motherboards in both boxes came with 32GB units (for caching).

    • burntham77
    • 5 years ago

    My hardware is plenty fast, so at this point I am just on the lookout for increasingly quieter cooling solutions.

    • Billstevens
    • 5 years ago

    Video cables are a pretty bad bottleneck if you are hoping for 4K or 8K above 60 Hz. The still-upcoming HDMI update is too little, too late. The latest upcoming DisplayPort standard seems to do a bit better, but may still end up being a bottleneck.

    My display itself is an old 1080p LCD, and it’s holding back what my R9 290 could be doing on a much better display.

    • tygrus
    • 5 years ago

    I answered other = The user on the other side of the keyboard 🙂

    Other possibilities = antivirus and other security software; never-ending OS & app updates every day (or every time I start using it); slow internet (e.g. a poor ADSL link >4km from the exchange); the non-existent task that takes time but no resources before it times out and the Windows/web/app continues (I’ve checked >10 sub-system counters on local and server machines, all <10% active and most <1%, yet I’m still waiting >10 secs for a response); single-threaded apps; apps with synchronous, one-I/O-at-a-time disk access; apps with memory/handle/etc. leaks.

    Hardware manufacturers blame the software designers, and then the software designers blame the slow hardware…

    • tbone8ty
    • 5 years ago

    not wanting to buy an intel cpu/mobo because im a silly fan boy. godang amd 🙁

      • chuckula
      • 5 years ago

      I upthumb because your complete lack of capitalization makes you the anti-SSK.

    • Waco
    • 5 years ago

    I put storage (because I do HPC storage) but generally it’s bad programming that’s really the issue…

    At home it’s usually my GPU (GTX 770).

    • Mad_Dane
    • 5 years ago

    My wallet!

    • btb
    • 5 years ago

    I checked “Other”: Poorly written software/device drivers/os as the biggest bottleneck.

    My PC should be plenty fast (Haswell/GeForce 770/16GB RAM/960GB SSD), but at times I still get weird audio stuttering in certain games.

      • Meadows
      • 5 years ago

      Something’s wrong with your PC. You should never have that issue.

      Consider checking without antivirus, without firewall, and/or without internet altogether, because poor network drivers or bad low-level networking applications in particular have done this to me in the past.

    • ET3D
    • 5 years ago

    The biggest bottleneck of my desktop PC is electricity – it hasn’t been hooked up for a couple of months now.

    The biggest bottleneck of the laptop I currently use is the CPU, an AMD E-350.

    • wingless
    • 5 years ago

    I should buy a 2nd GTX 760 or save for something insanely powerful. You can never have a GPU that’s too fast.

    • gerbilspy
    • 5 years ago

    I am the bottleneck, and will soon be replaced with a more efficient cybernetic me.

    • hasseb64
    • 5 years ago

    4K=GPU POWER IS LACKING…end
    CPU? Ha! 1% maybe

    • jibkat
    • 5 years ago

    My computer chair!

    Really, this 20-dollar Chinese knock-off of an office chair is not doing good things to my back and really does need to be replaced =/

      • THE RAVEN
      • 5 years ago

      You should see my chair… then you would be proud of yours. lol

    • KeillRandor
    • 5 years ago

    My indirect bottleneck is my sound card – it’s the only reason I’m having to stick with Windows XP – which is the main bottleneck… (Don’t have the money to update it – it cost me £500 new! (Edirol DA2496))

    • colinstu12
    • 5 years ago

    none?

    I have no qualms about any of my computer setup / peripherals / network / etc. It took a few years and a bit of money but it’s finally complete.

    • BIF
    • 5 years ago

    Equally between CPU and GPGPU (technically not “graphics”). So I had to choose CPU in the poll…

    • Theolendras
    • 5 years ago

    Still doing OK with a mildly overclocked Phenom II X6 1045T. It’s definitely the weakest link in my setup, but I think I will be good waiting for Skylake for the desktop and Braswell for a nice 2-in-1.

    For now, though, I’m hanging on surprisingly well. I’m tempted by bigger, higher-resolution screens, but I’m hopeful that by the time Skylake arrives, GPUs with stacked DRAM, probably all the DX12 features, and FreeSync will be available as well. I feel these will be required anyway to have a really great experience with new-generation title ports at 4K without compromises during the current-gen console lifetime.

    Meanwhile I do compromise somewhat, but I don’t feel it much. The overclocked 7850 is still doing it (the CPU is probably holding it back more often than not); even the heaviest new AAA titles won’t frequently drop below 45-50 fps at medium-high settings on 1080p. I feel that’s still decent so long as it’s not a competitive online shooter in these latest titles, which I’m not much into (I’m more of an RTS/adventure fan for the online experience), and there are a lot of older titles I ought to complete. Even then, the more demanding new titles I’m expecting will be Mantle-compatible (looking at you, Star Citizen), which might make them borderline acceptable (if Mantle can extract enough performance out of six cores), although that might really be a stretch. Worst case, I’ll wait 6-9 months before playing them while enjoying other awesome titles. This really is a golden age of gaming: too many games, so little time.

    Definitely, for a system originally bought in 2007 (yeah, I upgraded most parts eventually: new CPU, USB 3 add-on card, SSD, GPU changed twice, Vista to Windows 8), this is a ridiculous lifetime, and maybe it could be converted to a SteamOS machine later on for in-home multiplayer sessions with the by-then oldies.

    Still, I’m planning to move the DVR, Plex, transcoding duty, file server, backup, Murmur server, and pretty much all server duties to a dedicated, more efficient J2900 NAS-like system running FreeNAS or Ubuntu by the end of the year. This will be a cleaner whole-home setup, and offloading these responsibilities will keep me from fiddling before a gaming session.

    • ShadowEyez
    • 5 years ago

    Some good answers here. I picked battery while on my iPad. But as computers branch out into different uses, the bottlenecks for different computers and their apps change.

    Traditional Desktop or scientific computing – CPU, maybe GPU
    Mainstream box – HDD speed
    Gaming – GPU, display tech
    Laptop/mobile – battery life
    Financially challenged students/people – money

    In general I think software could use optimizing, as we have so many layers between the hardware and the end user that the inefficiencies can really add up (see the DX12 and Mantle work being done to address some of that).

    <soap box>
    The unconventional answer, though, is users: only roughly 1 billion of the world’s people have access to computers/smartphones/the internet. If more people had access to these resources most of us take for granted, this world could be a better place.
    And then we could have more engineers to design and build faster computers 🙂
    That’s the real bottleneck.
    </soap box>

      • Theolendras
      • 5 years ago

      Fair points. I’m actually quite happy with the turn of events: Intel putting major performance advancement on hold until the thermal budget can target both tablets and high-end servers might have created a performance plateau that accelerates the need for new software techniques. That’s better for the consumer in the end, because you get a comparable experience from lower-end hardware than you otherwise would…

    • kerwin
    • 5 years ago

    I’m running a 4790 with a GTX780 and two SSDs. I’m the bottleneck.

    • humannn
    • 5 years ago

    Definitely graphics. I have an R9 290 and I know there’s still no way I can game at 4k with good framerates. So I’m just sticking with my 1440p monitor until technology catches up.

    • My Johnson
    • 5 years ago

    At first I was like, “My PC is fine.” Then I started working with an extrusion in Adobe Illustrator. That changed my mind real quick about my CPU.

    • dragosmp
    • 5 years ago

    The 1080p monitor

    Need 4x 144Hz variable refresh rate

    • travbrad
    • 5 years ago

    I feel pretty happy with what I have now, so it’s hard to say. I guess my GPU (GTX 660) is probably my biggest bottleneck, but since I still have 1080p monitors it’s still doing fine. In any case there is nothing to upgrade TO for the same price. 20nm GPUs can’t come soon enough.

    A larger SSD would be nice too. I currently have a 128GB Crucial M4, but I’ve had to start installing some games on my WD Blacks instead. For now it’s not too bad (older/indie games don’t really need an SSD anyway), but I can see it’s going to be an issue in the future as my Steam library consumes my storage space like The Blob.

    At the rate Intel is going my 2500K will last till 2020.

    So basically I am in agreement with the poll results so far. :p

    • jackbomb
    • 5 years ago

    Currently graphics. Core i7 4930k @ 4.5GHz stuck with a vanilla GTX 650.

    I wouldn’t be surprised if my CPU could outperform my graphics card just using WARP.

    • Philldoe
    • 5 years ago

    I… I don’t think my PC has a bottleneck anymore. If anything my PC is overpowered for the tasks it is used for. I suppose a 4k monitor would even things out some =/

    • MarkG509
    • 5 years ago

    I voted “Other” because the bottleneck is getting ideas from my head, through the so-called “human interface devices” (keyboards/mice/touch-screens), into working code.

    Eventually, we’ll just be able to put on a “hat” of some sort and computers will just DWIM (do what I mean). Remember that Star Trek episode where Spock’s brain told Bones how to reconnect it to his body? Yeah, that…

    • Laykun
    • 5 years ago

    GTX 670 SLI just ain’t cuttin’ it for Nvidia Surround these days >.<

    • ultima_trev
    • 5 years ago

    GPU.

    Was hoping the R9 285 would give roughly R9 290 performance in a <150 watt footprint to replace the slow-as-molasses HD 7850, but alas, my hopes and dreams as always are crushed by the cruel mistress of fate.

    Maxwell Kenobi, you are our only hope…

      • Krogoth
      • 5 years ago

      The 28nm process is the problem. Maxwell isn’t going to work miracles. You will probably have to wait until 20nm parts come out.

    • Jason181
    • 5 years ago

    CPU, for 100+ Hz gaming. Source games can do it, but most newer games are in the 60-80 fps range. You can get a graphics card (or two, or three) capable of achieving a consistent 100+ fps, but there’s not a CPU available that will do it for most current games.

    We need more clockspeed, cap’n! And yes, I do overclock. Some of that is certainly threading (most games use 2 threads, newer ones 3-4, with Crysis 3 using 6-7!).

      • Ari Atari
      • 5 years ago

      Right here with you on that. In many cases I’ve found myself CPU limited because either the game can’t handle multiple threads or relies on the CPU much more than the GPU at high framerates.

      The Source game TF2 is an odd one. Give it a 4770K but pair it with an old 4890, and it somehow runs at 144 Hz on mostly full settings.

    • SomeOtherGeek
    • 5 years ago

    Mine is I/O, so I chose storage. But all I/O is the bottleneck. SATA, USB, HDMI, RJ45 to name a few. Communicating between systems is what is slowing my work down. I have VMs, yes, but even communicating from my main box to the VM is not all that fast. It is not horrible, but it is noticeable.

    It is the reason I don’t always buy the top-of-the-line parts cuz I can’t utilize them. Of course, my ego needs feeding every once in a while, then I go crazy. Then I regret it and chalk it up as “future-proofing”.

    • Kingcarcas
    • 5 years ago

    Not sure, I have an i3 with a 650Ti

    • UnfriendlyFire
    • 5 years ago

    Geoff, you should’ve included ISPs.

    What good is a $2000 gaming rig if your internet connection has 200+ ms ping and you can’t watch 720p Netflix or Youtube?

    Oh, and you also have to ration movies and Steam downloads because of a 250GB data cap. God forbid if you have to reinstall an OS and download all of the updates for it.

      • Jason181
      • 5 years ago

      A $2000 gaming rig on another continent?

    • DeadOfKnight
    • 5 years ago

    Sex appeal

      • Kingcarcas
      • 5 years ago

      This. Too lazy to ever finish “sleeving” and tidying the cables, and I don’t clean it until it gets really dusty…..

    • Derfer
    • 5 years ago

    The two leading answers are pretty spot-on. Storage, even with SSDs, has huge room for growth, though the benefits aren’t always noticed in games and apps that aren’t set up to take advantage of recent speed increases.

    GPUs, as far as enthusiasts are concerned, are horribly lacking: too weak for 4K, higher AA levels, and higher refresh rates. I’m about two years away from being able to run Crysis 3 at 4K 60 FPS on a single GPU, about three for adding MSAA, and about four years out if I want higher frame rates on top of that, though G-Sync/FreeSync will help cut the need for it.

    Interestingly, though, old and current cards do fine if you use lower resolutions, AA levels, and overall settings. So you end up with some people feeling like GPU power is just fine.

    • w76
    • 5 years ago

    My chair. Seriously. Waiting on a good deal on something with good back support that won’t let me easily slouch, if possible, but the popular ones people point out sometimes cost $300-$1000. I’m doin’ okay, but that’s a lot for a chair.

    • Krogoth
    • 5 years ago

    In the practical sense, a killer app. There’s no killer app in the gaming and mainstream world that demands more than what the current quad-core solutions yield.

    In the technical sense? Probably the GPU, followed by the CPU, memory bandwidth, and the number of PCIe lanes, and finally I/O throughput. If I had an HDD as my primary storage device, then I/O throughput would be at the top of the list.

    • blastdoor
    • 5 years ago

    Maybe the biggest bottleneck is the communication of my intent to my computer. In some sense, a computer is like a co-processor to the human brain. The human brain is incredibly good at certain things (generally speaking, pattern matching) while computers are incredibly good at a different set of things (generally speaking, the serial execution of very specific instructions). But the communication between these two amazing processing units is incredibly slow.

    • CeeGee
    • 5 years ago

    I voted other, as I think my PC is pretty well balanced right now. This changes over time, of course; the graphics card is always the first thing that needs upgrading for me.

    I guess the biggest bottleneck at the moment is poor console ports.

    • tipoo
    • 5 years ago

    Probably me. My computer switches hundreds of millions of transistors at up to three billion times per second and another few billion in the GPU at hundreds of millions of times per second, and I’m just sitting here masticating.

      • Pwnstar
      • 5 years ago

      You sure that’s masticating?

        • tipoo
        • 5 years ago

        Chewing, yes.

        Chewing while fapping.

      • fredsnotdead
      • 5 years ago

      PEBKAC (Should have been an option, I had to select “other”)

    • Amazing Mr. X
    • 5 years ago

    By far, my biggest bottleneck is my CPU. I know I’m going to get flak for this, but my 3770K bottlenecks the life out of my GTX 680 in a number of video games. Overclocking helps, but really it’s just a drop in the bucket in terms of the delays I’m talking about. You might argue against it, you might even have valid points, but to me the situation is pretty clear. When my GPU sits at 40% usage or less, my G-Sync monitor somehow manages to produce stuttering, my gaming mouse experiences random input lag, my FPS refuses to increase, and my CPU has at least one core pegged at around 100%, it’s pretty clear to me that there’s a processing delay in there somewhere. True, it’s not in all games by any stretch of the imagination, and I know this is essentially a problem with these programs being poorly coded. Still, I find myself hoping that either AMD’s or Intel’s latest architecture knocks performance out of the park so that I can finally get something over 45 stuttering frames per second in Assassin’s Creed 4 and ETS2.

      • Krogoth
      • 5 years ago

      What kind of resolution and level of AA/AF are you running? If you are on the lower end of the scale, the CPU (clock speed) becomes the bottleneck. If you’re trying to shoot for the high end, then the GPU quickly becomes the bottleneck.

        • Amazing Mr. X
        • 5 years ago

        Usually no AA at all in the titles where I have problems. As for AF, I’ve never found it to hinder performance in any measurable way, but I usually go ahead and drive it up to x16 unless I’m desperate to increase my performance.

        Assassin’s Creed 4 just performs badly across the board. Minimum settings get me into the 60 fps range at 1080p, but PhysX is outright broken even with a ridiculously powerful dedicated PhysX card in the 560 Ti 448. At about 98% completion I pretty much just gave up on it out of frustration.

        ETS 2 is a whole other animal though. I try to play the game exclusively in Stereoscopic 3D [i<](because otherwise I crash into things a whole lot)[/i<], but turning on 3D cuts the frame rate to a third at best in any city. Realistically though, we're talking about 25% of my non-stereoscopic fps on average in populated areas, which is placing it firmly in the twenties. Fiddling with the settings reveals that the problem is almost exclusively related to the Mirror Reflections settings, however lowering the setting is a no-go because only the highest one works correctly in 3D. Mirrors are a bit important to have in a truck driving game, so I'm basically stuck dealing with the CPU weight of intense shader complexity. If they instituted some basic threading I'm sure my GPU usage would shoot up over 50%, but as I said this is all almost exclusively a coding issue.

    • riviera74
    • 5 years ago

    I can solve all my PC bottleneck problems if only one thing did not stand in my way: lack of funds to actually build a new PC!

    • deathBOB
    • 5 years ago

    My wallet.

      • not@home
      • 5 years ago

      ^^This^^

    • Kougar
    • 5 years ago

    Core count. Folding, videos, streaming, games, Virtual Machines, and web browsers all eat cores, so there’s never enough to run F@H in the background on a few of them without the system bogging down. Just GPU folding alone requires 1 dedicated core and 2 for best performance.

    • RICJUN
    • 5 years ago

    I think no one really knows the real capacity of his own machine…
    Probably a better interface to work with, and a better way to exchange data with the “cloud,” would make it clearer that we can get much more out of the same PCs we already know…

    • Alistair
    • 5 years ago

    Pretty obviously the biggest brake on your PC is the current console generation.

      • Krogoth
      • 5 years ago

      It is far more complicated than this.

      Suffice it to say, it is a PITA to code a program that harnesses more than two threads effectively, and not everything can be parallelized.

        • Jason181
        • 5 years ago

        Taking “current” to mean in use now, the PS4 and Xbone both have 8 threads, don’t they?

    • DarkMikaru
    • 5 years ago

    My vote goes to storage, hands down.

    Go find any old computer and then upgrade the hard drive to a newer drive or SSD and watch the performance skyrocket. Hell, I bet even an old Athlon 64 / Pentium 4 would be viable if paired with an SSD. That would be a fun experiment, actually. I’ve got an old Pentium 4 520 lying around which I might experiment with someday. 🙂

    • tahir2
    • 5 years ago

    Biggest bottleneck? Me.

      • Krogoth
      • 5 years ago

      If you want to go that route, then PEBCAK has been the bottleneck since the first successful electrical computers. 😉

        • Haserath
        • 5 years ago

        Error: Please insert twinkie to continue.

      • TwoEars
      • 5 years ago

      Yupp. Indeed.

      • Anonymus_notthetroll
      • 5 years ago

      omg thank you! lol. I was getting concerned/self-conscious (ha) here that NO one was saying “incompetence” or something. Infinity thumbs up!

      No money, or being just a plain ol’ dumb@$$… haha lol (Metonymy/Tahir2 ftw)

    • jdaven
    • 5 years ago

    It’s been so long between polls, we don’t even get a summary of the last poll. Nice.

      • Meadows
      • 5 years ago

      To be fair, the last poll was almost as dull as this one.

    • ALiLPinkMonster
    • 5 years ago

    I’ve got a solid system, but I feel like my processor is held back a little by the somewhat restricted airflow of my Silverstone RVZ01. My next upgrade will be liquid cooling.

    • Mightyflapjack
    • 5 years ago

    Should be separated into desktop vs. mobile; they have entirely different priorities.

    My “other” vote would be:

    Lack of innovation in program utilization of multi-core processing.

      • Krogoth
      • 5 years ago

      There has already been a ton of utilization and innovation in multi-core processing since multi-core chips became commonplace. The problem is that returns diminish with the complexity of the code, and not everything can be parallelized.

    • Voldenuit
    • 5 years ago

    PEBKAC.

    • the
    • 5 years ago

    Other – the lack of optimized software for the hardware I already have.

    More often than not, I’ll come across a process that’ll load a single CPU core up to 100% and not touch any others. Granted, there is always going to be a serial portion of code by merit, and hence limitations on how much performance can be extracted ([url=http://en.wikipedia.org/wiki/Amdahl's_law<]see Amdahl's law[/url<]). The same holds true for parallel applications that could go even wider by using GPUs when possible. A great example of this would be Handbrake for encoding: only [url=https://trac.handbrake.fr/wiki/GPUAcceleration<]one minor step has OpenCL acceleration[/url<], with QuickSync support still in beta.

    Another area where software can almost universally be improved is more aggressive usage of caches. Memory is radically cheap nowadays, enabling whole data sets to reside in memory. For example, why don’t applications like iTunes buffer the entire next song on a playlist in memory for playback? Often I’ll listen to a song for a few minutes and then skip to the next one, where there is noticeable lag as it loads from disk. Cache that MP3 and then immediately decode a portion in memory for simple playback. This will even save battery life in many scenarios, since loading everything once into memory takes less energy than piecemeal loading. Operating systems attempt to do this nowadays but are rather conservative when it comes to using memory for this. I get why applications would default to light caching, as they need to run on the widest range of hardware. That’s no excuse, though, for not designing a program that scales to a given environment.

    Lastly is the slow adoption of new CPU extensions. Not all new instructions are useful for a given application, but it shouldn’t take half a decade to produce a code path for the ones that are useful on occasion.
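    For reference, the bound being pointed at above: Amdahl’s law says that if a fraction p of a program’s work parallelizes perfectly across N cores while the rest stays serial, the overall speedup is capped at

    \[
    S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
    \]

    So even a program that is 80% parallel (p = 0.8) can never run more than 5x faster, no matter how many cores you throw at it.

    And here is a minimal sketch of the “buffer the next song” idea in C. It is illustrative only: the file name is a hypothetical stand-in, and a real player would prefetch on a background thread while the current track plays.

    ```c
    /* Sketch: read the next playlist entry into RAM ahead of time, so that
     * skipping tracks decodes from memory instead of waiting on the disk. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned char *prefetch_track(const char *path, long *size_out)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;

        fseek(f, 0, SEEK_END);   /* find the file size... */
        long size = ftell(f);
        rewind(f);               /* ...then go back to the start */

        unsigned char *buf = malloc(size);
        if (buf && fread(buf, 1, size, f) != (size_t)size) {
            free(buf);           /* short read: give the memory back */
            buf = NULL;
        }
        fclose(f);
        *size_out = size;
        return buf;              /* the whole track now lives in memory */
    }

    int main(void)
    {
        long size;
        /* "next_track.mp3" stands in for the next playlist entry. */
        unsigned char *track = prefetch_track("next_track.mp3", &size);
        if (track) {
            printf("prefetched %ld bytes; skipping ahead now hits RAM\n", size);
            free(track);
        }
        return 0;
    }
    ```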

    • kravo
    • 5 years ago

    My PC’s bottleneck is my complete lack of time to take care of it. (And use it as much as I used to.)
    But soon, I’ll replace parts with new, stronger ones.

    I just need a good reason that justifies the expenditure.

    • kamikaziechameleon
    • 5 years ago

    On PC it isn’t hard to chew up all the GPU horsepower you throw at a machine. Additionally, game engines have lots of scaling issues. For example, we all know dual GPUs average about a 25% increase in performance instead of DOUBLING the output with the doubling of horsepower. GPUs feel outdated the day you plug ’em into your machine. RAM, CPU, storage: those all have long half-lives in gaming and desktop relevancy, but GPUs are always behind the software they are designed for.

    • Chrispy_
    • 5 years ago

    [b<]Other:[/b<] It’s absolutely STAGGERING how inefficient software can be. Timeouts and inefficient duplication of work in code. How, with SSDs, multicore 4GHz processors, and multi-gigabit interconnects, does it still take five seconds to open Photoshop? To a computer, five seconds is a hundred billion lifetimes.

    Why do network adapters take so long to initialise if the switching hardware at the other end works at hundreds of megahertz, and the NIC is initialised during POST?

    Even talking about basic functionality, operating systems are just ridiculously bloated these days. Nearly 20 years ago, it took about 30 seconds from a cold start to opening a word processor, running Windows 3.1 off a 4500rpm hard drive plugged into a 66MHz 486. Today, for all the advances in speed, it still takes about 30 seconds to open a word processor.

      • exilon
      • 5 years ago

      [url<]http://en.wikipedia.org/wiki/Wirth's_law[/url<]

        • Chrispy_
        • 5 years ago

        Damn straight!

        I wonder what would happen if it was possible to boot Windows Vista (release version) on my old 486DX2. Would it boot in under an hour?

      • blastdoor
      • 5 years ago

      Great point.

      And it’s particularly galling when this inefficiency comes from the OS. I can understand that smaller app developers just can’t afford to be as efficient as possible, but there’s no excuse for OS vendors not being on top of this kind of stuff.

      • Krogoth
      • 5 years ago

      It is because code has become far more complicated (millions of lines versus thousands) and is tailored for a larger range of architectures and platforms (higher-level) instead of being coded for a specific platform/architecture.

      FYI, my current system loads up older and smaller applications within a few seconds, and booting from a cold start takes only 20 seconds. This is due to the SSD more than anything else.

      • w76
      • 5 years ago

      To add to the list of things regarding software… Not taking advantage of new extensions to x86, like the latest additions to SSE and AVX, or even older additions to SSE in the name of “compatibility.” I know it takes developer resources, but why even have such hardware just so it sits idle in 90% of the cases where it could provide a substantial boost? Are customers/consumers (of software that eats substantial resources) on average that oblivious to the potential that it makes no commercial sense to put in the time?
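      A minimal sketch of what that run-time dispatch looks like in practice, assuming GCC or Clang (the builtins below are theirs; the two kernel functions are hypothetical stand-ins):

      ```c
      /* Pick an AVX2 code path at run time when the CPU supports it,
       * falling back to plain scalar code otherwise. */
      #include <stdio.h>

      static void kernel_avx2(void)   { puts("AVX2 path"); }   /* stand-in */
      static void kernel_scalar(void) { puts("scalar path"); } /* stand-in */

      int main(void)
      {
          __builtin_cpu_init();               /* populate CPU feature flags */
          if (__builtin_cpu_supports("avx2")) /* CPUID check, once, at startup */
              kernel_avx2();
          else
              kernel_scalar();
          return 0;
      }
      ```

      The detection itself is one cheap CPUID check at startup; the “compatibility” cost is really in building and testing the extra code path, not in finding out whether the hardware has it.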

        • Chrispy_
        • 5 years ago

        Heh, I hadn’t considered this but that’s also a damn good point.

        Like, why is there still a 32-bit version of Windows?!?

      • UnfriendlyFire
      • 5 years ago

      Because not everyone can create extremely complex software with low-level or assembly code.

      Although I’ve seen a few sloppy and bone-headed coding practices…

    • ptsant
    • 5 years ago

    I think my system is pretty well balanced:
    – AMD 8350 cooled by Corsair H80i and slightly undervolted
    – 16 GB ECC 1333
    – Samsung 840 500 and 840 EVO 750, 2x2TB WD RE
    – Seasonic X-750
    – AMD 7950 (I also have a 2x280X but I am using the 7950 because it produces less heat and is sufficient)

    If I could change some parts for free, I would probably go for the AMD 8370, 32 GB ECC 1866, 2x4TB WD RE.

    Anyway, I voted for memory because 1333 is oldskool.

    • sweatshopking
    • 5 years ago

    I’m running a 4670k and a 290x with a mechanical hard drive. Wife says I’m not allowed to spend more money on my computer.

      • albundy
      • 5 years ago

      cant you ask the man of the family for some money?

        • Ninjitsu
        • 5 years ago

        Maybe they have this equality thing going on. Strange!

          • albundy
          • 5 years ago

          i have no idea what that means.

            • Ninjitsu
            • 5 years ago

            Means there is no “man of the family”, they run it at the same level, as partners, with different tasks.

            Anyway, SSK’s explained so I should shut up now.

          • sweatshopking
          • 5 years ago

          we have different tasks. i’m a stay at home dad, and my wife is a student. I home school my kids, as we tend to move a lot (my wife has gone to a number of different universities over the past few years), clean, cook, give her the massages she seems to need daily. She handles the finances, and I don’t. Works for us.

            • SomeOtherGeek
            • 5 years ago

            That is the definition of a family. Compromises.

            Don’t worry, SSK, your SSD will come anytime – they are becoming cheaper and cheaper.

      • Meadows
      • 5 years ago

      Weren’t you the guy who balked at subscribing to TR for a dollar?

        • sweatshopking
        • 5 years ago

        No. 5 DOLLARS.

      • derFunkenstein
      • 5 years ago

      You need for your hard drive to have an “accident”

      • Krogoth
      • 5 years ago

      You realize you can get a modest SSD for under $199 and use it as a boot device? It will make a massive impact on your general computing experience.

        • sweatshopking
        • 5 years ago

        Of course I do. I had one previously in a computer I sold for a $400 profit, then bought my new one. But when buying my new one, I wanted a better GPU instead of an SSD.

    • kdashjl
    • 5 years ago

    My GTX 260 just can’t keep up with modern games. Maybe I’m asking too much of such an old card.

      • Chrispy_
      • 5 years ago

      Everything else the same age copes just fine though…..

    • GodsMadClown
    • 5 years ago

    My bottleneck is cheese. I find it hard to make a computer with enough cheese throughput. I need Scott to build me a value/performance scatter plot of 99th percentile cheese delivery times.

    • Rakhmaninov3
    • 5 years ago

    Being 7 years old

    • internetsandman
    • 5 years ago

    I voted graphics because networking wasn’t an option (and it isn’t really the fault of my PC anyway). I live in a massive apartment complex that doesn’t do a great job of providing low-latency response, so online games basically have the response pattern of a galloping horse with a limp. It’s sad that nothing I can do and no service I pay for can alleviate the fact that I’m relying on the traffic of 360 different apartments to be quiet enough to lower latency.

      • cmrcmk
      • 5 years ago

      I’ve never lived in a large apartment building, so pardon my ignorance, but it sounds like you’re saying that everyone in the apartment passes through the same switch? Does the building run its own LAN, or does the cable/DSL company have insufficient hardware to handle so many connections?

    • localhostrulez
    • 5 years ago

    Think about this from an average user perspective for a moment. As someone who uses an ultrabook (t440s) with external peripherals and doesn’t game, I really don’t have a bottleneck, except myself perhaps. CPU? Haswell ULVs are good enough for most use. GPU? I can drive multiple monitors and do everything I need to just fine. GPU acceleration works well enough that I can play 1080p videos on multiple displays simultaneously without lag (when will I ever need to do this anyway?). Memory? I maxed this system out (only 12GB), and can fire up multiple VMs without touching the pagefile. Very rare that I go anywhere near the limit otherwise. Storage? SSD yo. Cooling? This machine runs pretty cool as is. Battery life? I’ve got lots of battery capacity (~95Wh total), and I’m running ULV.

    The one thing I might wish for is better CPU efficiency. The battery life of this machine (10-12 hrs with touchscreen) without the bulk of the extended battery would be sweet… That, and power saving features that don’t mess something else up in the process. There’s a setting in the video driver that’s supposed to save battery with minimal impact to display quality as they claim (“Display Power Saving Technology”), but unfortunately, I can tell the difference and it bugs the crap out of me. And there are wifi drivers that glitch if you let Windows turn the card off to save power…

      • Mark_GB
      • 5 years ago

      I have a pretty fast system… a GTX 770 video card which screams, a 1TB Samsung SSD, 16GB of system memory at 2666MHz. The CPU is an Intel i5-4670. Quick, but I think the 6Gbps SATA 3 ports are the bottleneck now.

      Once we get a CPU that supports DDR4 at something above 3200MHz, along with new PCIe-based storage ports, I will start thinking about doing some upgrading: motherboard, CPU, and SSD.

        • localhostrulez
        • 5 years ago

        But how often do you hit the limits of your SSD? I can bring my 840 EVO (on SATA 3/6Gbps) to its knees by consolidating VM snapshots, but that’s about it (a purely SSD/HDD-bound task). And even that is pretty fast. (It was slow as hell on hard drives.)

        Heck, I have an HP 6530b here, with a 2.53GHz Penryn C2D, 4GB DDR2, a 4500MHD, and some basic 160GB hard drive from 4 or 5 years ago. Retired machine from work. And even that feels decently fast for the things most people do. Granted, it could use an SSD, a cooler-running CPU, more battery life, and a better GPU (although the 4500 can at least fully decode H.264, something I had issues with on X3100s), but with the exception of the HDD it isn’t exactly slow for general internet use. That machine works fine for me as a spare/testbed, and will probably be upgraded only if I get something decently better that’s retired from work. As for the main machine, I usually upgrade if my existing equipment is feeling antiquated, not if something higher-spec’d comes out. My wallet feels a little better that way…

    • Phaleron
    • 5 years ago

    The time it takes for the RAID array to initialize.

    • Anovoca
    • 5 years ago

    A small little bone called the carpal tunnel.

    • hubick
    • 5 years ago

    I just plugged a new 40″ 4K TV into my 7970 (Catalyst drivers under Fedora 19).

    Firefox really doesn’t like windows larger than half the screen before everything starts to crawl.

    I can get 1080 videos to play full-screen, but I tried to play the 4K Elysium trailer and it glitched and audio dropped out, etc. That’s under Totem. I got a black screen and no sound from Flash on 4K Youtube, and VLC wouldn’t even give me a picture at all when maximized.

    I haven’t even tried gaming.

    Thankfully, Eclipse works great, which was really my motivation for it.

      • Pancake
      • 5 years ago

      Praise Eclipse – it’s kind of the home where my mind lives. It’s why I’ve got 2 Ultrasharp 3011’s – one for Eclipse, one for everything else.

      Oh, to make it relevant to discussion. i5-3570, 32GB RAM, 1TB SSD, NVidia 9600GT. One of these things is not quite as up to date as the others. But I don’t care. It drives both screens, uses little power and I only care about text editing and 2D graphics stuff.

    • willyolio
    • 5 years ago

    money.

      • chuckula
      • 5 years ago

      Sorry, Metonymy beat you.

    • USAFTW
    • 5 years ago

    Consoles.

      • alloyD
      • 5 years ago

      I wish I had more upvotes to give!

      • chuckula
      • 5 years ago

      Needs to be upvoted to a tie with the “money” line below.

      • THE RAVEN
      • 5 years ago

      It’s always the goddam consoles…

    • ZGradt
    • 5 years ago

    I’d like some more CPU for quicker encodes. Everything else seems pretty much instantaneous.
    Network speed could be improved. If you’re one of the lucky few with a gigabit fiber ISP, you’re pushing the limits of your LAN. That seems crazy if you think about it, but 15 years ago the LAN we were gaming on was 10Mbps shared (hubs, not switches). Latency is a whole other discussion, tho. It bugs me that we’ve been stuck at 1Gbps for so long that ISPs have actually caught up. Wireless bugs me too: most devices are still 2.4GHz N at 150Mbps.

    Memory or storage? Bah. If I need it, I can just add more. I also have a fileserver for mass storage, and I can’t remember having to swap memory to disk since like 2006.

    • dmjifn
    • 5 years ago

    At work, this week, it was storage. Not speed – I’m almost never waiting on my work PC. Started with an OEM LiteOn 128GB SSD, had to restore some SQL Server DBs to it, and ran out of space -> got upgraded to a Samsung 840 Pro 250GB. Installed VS2013 and SQL Server 2012, then restored the DBs again, and ran out of space. Supposedly I’m getting an 840 EVO 500GB next. Lost about 12 hours doing the disk-space shuffle so far this week. Not sure it’s worth it for the company to cheap out with this parade of small-capacity drives.

    </rant>

    • ronch
    • 5 years ago

    My apps are bottlenecking my FX-8350. Lazy programmers!!

    • slaimus
    • 5 years ago

    Anyone remember when “audio” was a legit bottleneck? Windows Vista’s removal of hardware acceleration from DirectX really turned computer audio into the lowest common denominator. The very good Analog Devices integrated audio is all but gone from motherboards, and everything is now Realtek.

    I am still using my X-Fi Platinum from almost 7 years ago, probably the best $75 piece of computer equipment in terms of utility for me. Still nothing matches the front I/O ports with MIDI in/out, optical in/out, 1/4″ analog in/out, and headphone amp.

    • RickyTick
    • 5 years ago

    My Caviar Black 1TB is most certainly the bottleneck.
    I see no reason why my i7-950 and GTX670 can’t do everything I want it to do.

    • Thrashdog
    • 5 years ago

    My personal machine is beefy and fairly well-balanced. The SSD and RAM do a good job of keeping the 4670K fed, and the CPU doesn’t hold back the GTX 780. At work, though… I’d say we need SSDs more than anything else, and then RAM, and then CPU. Trying to do heavy-duty workstation tasks on an outdated laptop just isn’t cutting the mustard for the majority of my users.

    • lycium
    • 5 years ago

    Compute power, even though I’m using OpenCL on CPU, iGPU and GPUs. More jiggaflops, please!

    I write 3D rendering (Indigo Renderer) and fractal (Chaotica) software.

    • Duct Tape Dude
    • 5 years ago

    TR, I’m disappointed in you all. Why isn’t everyone answering [b<]cooling?[/b<] If we all ran our PCs at superconducting temperatures, we could each have over 9000 performance.

      • Takeshi7
      • 5 years ago

      Now I’m wondering how much a motherboard made out of high temperature superconductor would cost.

      • trackerben
      • 5 years ago

      Dr. Chandra is that you?

        • BIF
        • 5 years ago

        Why do people keep saying “so-and-so is that you?”

          • derFunkenstein
          • 5 years ago

          BIF is that you?

      • Jason181
      • 5 years ago

      Not sure I’d trust supercooling to “Duct Tape Dude”

        • Duct Tape Dude
        • 5 years ago

        What? It’s leak free, or about to be.

      • Ninjitsu
      • 5 years ago

      Someone very arrogantly told me once here that it’s apparently “over 8000”.

      Never watched DBZ and don’t particularly care, but just in case they come after you next.

        • Pwnstar
        • 5 years ago

        Aww, I wasn’t that bad! =(

        • DrCR
        • 5 years ago

        Over 8k? Maybe. Over 9000? There’s no way it could be that high.

      • Kharnellius
      • 5 years ago

      OVER 9000!?!!?!?!

    • slowriot
    • 5 years ago

    1. Software.
    2. User interface input devices.
    3. User interface output devices.

      • Takeshi7
      • 5 years ago

      4. User.

    • ronch
    • 5 years ago

    My Am486DX2-66 CPU is bottlenecking my R9 290X. I thought this CPU could handle everything.

      • willmore
      • 5 years ago

      Should have gone with the 80. That 40MHz FSB really helps. It better matches the DRAM timings so you get higher bandwidth and less latency!

      • ptsant
      • 5 years ago

      Making this combination work would be an interesting project. A friend of mine made an ATA controller for his Oric Atmos (much older than the 486) as a pet project, but I suspect making PCIe work on a 486-class computer would be even more complex.

        • Takeshi7
        • 5 years ago

        A PCI-to-PCIe bridge chip, maybe. I’m not even sure the 486 chipsets had PCI. You might be able to do it with a Pentium. That would be hilarious.

    • SetzerG
    • 5 years ago

    I chose other. My system is both plenty powerful and seriously bottle-necked, depending on application.

    I am running a Core 2 with 4GB RAM and a GTX 460 on Linux. I have an SSD.

    My system does pretty much everything I want to do except play games. For that, my bottleneck would be the OS. There’s still a ton of stuff that won’t run on Linux. Even if I had Windows, I could use a serious upgrade for my CPU, RAM, and Graphics.

      • Whispre
      • 5 years ago

      I am with you… My computer really isn’t bottlenecked anywhere apparent… with mirrored SSDs for the system, RAID 10 with an SSD cache for data storage, an overclocked 3770K, and 32GB of RAM (no cache)… I never notice slowness, other than bad applications from time to time.

    • mcnabney
    • 5 years ago

    Internet connection.
    Seriously, it is 2014. 15M/0.5M is a travesty. Hasn’t gone up in 10 years, but the bill sure has.

      • superjawes
      • 5 years ago

      But AT&T and Verizon say that you only need 4Mbps!

        • Omniman
        • 5 years ago

        My hell is 2M/1M as the fastest I can get until next summer. Fiber to the home is 1,000 feet down the road on one side and 1,500 feet on the other, but the contractor they hired to put the lines on the poles took some shortcuts and bypassed an intersection. They told me it would be $30,000 to run the 2,500 feet of fiber line to finish the road.

      • My Johnson
      • 5 years ago

      Cox recently doubled the speed for us.

        • Ari Atari
        • 5 years ago

        Same here. We are the lucky ones my friend.

        • culotso
        • 5 years ago

        Same with Charter a while back: their low-end speed now seems to be 60Mbps down. I wish the upload speeds would catch up, though.

          • patrioteagle07
          • 5 years ago

          I am in broken Charter territory… Houston goes to Time Warner after it merges with Comcast (Charter gets other, better territory in the trade)…

          Because it will soon™ not be Charter’s problem, they will not fix the N. Houston infrastructure. My ping looks like this… [url<]http://www.pingtest.net/result/105599452.png[/url<]

      • Andrew Lauritzen
      • 5 years ago

      This. x1000.

      • Ninjitsu
      • 5 years ago

      My problem isn’t the speed, it’s the 30GB cap.

        • Jason181
        • 5 years ago

        You probably used half your allocation just posting that comment.

        • jessterman21
        • 5 years ago

        Now that is straight-up awful 🙁

          • Ninjitsu
          • 5 years ago

          Yeah, our household does about a GB per day, when I’m strict with myself. :/

        • squeeb
        • 5 years ago

        Yup. If you’re a Crapcast user, coming to a city near you.

        Please google…save us..

      • Krogoth
      • 5 years ago

      There are several real-world factors behind this.

      It is not entirely the fault of some smoky backroom where executives of ISPs are conspiring against the customer.

      The market demand for higher bandwidth speeds in the mainstream market isn’t there outside of niches. It takes significant capital and time to upgrade the existing infrastructure in Canada/USA. ISPs (those who want to upgrade) have to deal with local politics (NIMBY). The lack of competition, and strong-arming by certain ISPs, doesn’t help either.

      Despite these obstacles, the average connection speed in the USA/Canada has been steadily improving. A decade ago, 10Mbps was considered ultra high-end; now it is the mainstream package for most suburban/urban areas. 100Mbps or more is no longer exclusive to dedicated, business-tier plans.

        • Scrotos
        • 5 years ago

        I think you’re wrong. In every instance that municipal broadband has become reality, the telco and cable incumbents have both raised their speeds and lowered their prices to become competitive. There’s a reason that they lobby so hard in city and state legislatures to block the cities from offering their customers a reasonable alternative. In every place the bandwidth is available for cheap, customers flock.

        If there wasn’t demand, why would the cable and telco ISPs block Netflix traffic? They claim it is too large a load on their network, surely not just a money grab and attempt to promote their own on-demand offerings, right?

        Money a problem? I don’t really buy that:

        [url<]http://arstechnica.com/business/2014/07/verizon-nearly-doubles-quarterly-profits-after-buying-verizon-wireless/[/url<]
        [url<]http://arstechnica.com/business/2014/04/rolling-in-it-comcast-profited-1-9-billion-in-first-3-months-of-2014/[/url<]

        You look at the WSJ or other places, and they are up 16% and 15%, respectively. The US market has gone to crap since 1996:

        [url<]http://en.wikipedia.org/wiki/Telecommunications_Act_of_1996[/url<]

        Practically overnight, thousands of ISPs went out of business, and for what? We pay tariffs on our broadband to subsidize the telcos in providing upgraded service to rural areas, and that isn’t happening. Hell, they are busy screwing the people who got flooded in NJ by killing all the copper. I like fiber and wireless just as much as the next guy, but if you’re getting regulatory breaks and subsidies, you’d better give something back.

        My heart isn’t really breaking for these poor ISPs who have to “deal with local politics.” I can’t get a fiber competitor for my business because the local telco strong-armed the building into only accepting their fiber. Where’s the local politics causing them problems there? Seems like it’s causing me and their potential competitor problems.

          • Krogoth
          • 5 years ago

          You are mixing up demand for broadband with demand for ultra-high speeds. There’s demand for broadband connections, but the demand for ultra-high speeds only exists among businesses and a vocal minority. The masses don’t care enough to make a fuss over it. Businesses that have a *need* for it are more than willing to pay for it. The vocal minority don’t like the price tag and contracts involved.

          You seem to fail to understand the logistics involved in the last mile for most of the USA/Canada. It takes time (years) and capital to lay down all of the fiber on top of the aging infrastructure. Local politics is a big problem for both sides: if you want to upgrade existing infrastructure, in most places you have to get a permit to do so, and if part of the upgrade plan involves putting up stations or wiring across somebody’s property, they are going to raise hell over it. Wireless carriers have been running into problems with cellular towers being *ugly* and ruining property values, along with the NIMBY crowd. On top of that, you have incumbents who will do everything in their power to make it difficult for competitors and newcomers.

          Most of the ISPs in the 1990s died because they were victims of the dot-com boom era, not because of the 1996 Telecommunications Act.

          Government subsidies are the reason why wonderspots like some areas in the EU, Japan, and South Korea exist. Their governments proactively handle all of the infrastructure work, and they *lease* it out to companies, who just have to pay a license/permit fee to use said infrastructure.

          The Netflix issue has nothing to do with insufficient bandwidth or networking issues. It has more to do with old media and new media trying to screw each other over. “Net neutrality” was nothing more than a gentleman’s agreement made back when the internet was young and ISPs didn’t want to screw each other over. This was thrown out the window as soon as video streaming became commonplace and started competing with existing cable and satellite providers, some of which operate their own internet service via DSL/cable. Netflix and its kin were running over the same lines as their ISP competitors but didn’t have to pay the infrastructure and maintenance costs. It is natural that ISPs with their own cable/satellite services would try to find ways to make those competitors pay, or to curtail this.

          In spite of all these problems, the average connection speed in the USA/Canada has been steadily improving, and availability is extending to more and more rural areas. The loudest voices demanding more speed are latecomers who don’t know what it was like back in the 1990s and before: if you wanted broadband internet, the only choice was a dedicated T1 line (still not cheap) until DSL/cable started to come out around 1998-1999 in limited areas. Fast-forward five years, and cable/DSL became more commonplace; a few more years later, you had the explosion of new media in the form of streaming video services. This pushed the demand for increased bandwidth in most customer-tier plans.

            • Scrotos
            • 5 years ago

            Most of the ISPs died because the telcos raised their wholesale rates like a mofo once they got out from under the governmental thumb.

            Streaming video is a mass-market consumer thing now, not “ultra high speed” that only businesses need. I had Comcast at 50 Mbps and it had problems streaming HD video, not looking at Netflix even, just other streaming sites.

            It certainly isn’t helping the consumer when the companies providing infrastructure also provide services that compete with companies that don’t have their own infrastructure. i.e. Comcast’s on demand versus Hulu versus Netflix, etc. Going to back to your “gentlemen’s agreement” in regards to net neutrality, I’m paying for an internet connection, not for the provider to QoS the hell out of my access to make their own offerings appear favorable. I’m no idealist, I just want what I pay for. At least with toll roads you know what you’re getting when you get on the road. With the incumbent telco and cable monopolies, who knows. You’re throwing money at them for whatever they want to do and you have limited options and no real recourse for alternatives.

            I think you’re not quite getting it. You talk about prices for entry into markets being high because of local politics and incumbents blocking competitors. Well, the main roadblock to higher speed and lower prices ARE the incumbents. From getting state legislature to pass laws to block municipal broadband to sending out warning letters to local governments when a competitor tries to enter the area (CenturyLink/Qwest/US West, actually an incumbent telco, is trying to deploy fiber in Denver and Comcast is doing their best to block them as an example close to my home), time and time again we see that actual competition lowers prices and increases service levels.

            Any municipal broadband deployment has shown this to be true. Sucks for the cable customers just out of the municipal service areas who have rates 2 to 3 times as high for the same or lower speeds.

            When looking to upgrade our leased line from Denver to Lincoln, thank god for this:

            [url<]http://siliconprairienews.com/2012/02/lincoln-announces-plan-to-install-downtown-fiber-grid/[/url<]

            I contacted everyone in the area, with prices ranging from $1200/month for 1.544 Mbps to $3500/month for 45 Mbps. Cable companies were a few hundred, but horribly asymmetric. We’re getting a fiber buildout because the city had a partnership with another company that got it pretty close; they agreed to run a few hundred more feet so we could get access and dump our leased lines for a VPN. If the city hadn’t taken the initiative and gotten fiber rolled out, as a customer I’d be SOL and shelling out a ton of money for sub-par speed and service. Thankfully Lincoln is a tiny market, not like Denver, and the incumbents there (Windstream, who bought Alltel’s non-wireless telco, plus Charter and Time Warner) didn’t try to block them. Or weren’t successful in the attempt.

            $1200/month for a T1 in downtown Lincoln. Seriously. That might have been ultra high speed 20 years ago, but not since the mid-90’s, when I was beta testing AOL’s DSL offering (1.5Mbps/64Kbps, craaazy) and then got 1.1 Mbps SDSL to host Quake 2 servers from my house.

            I fully understand that last-mile deployment takes time and money, but like I said, big telco wants to deploy that locally and they are getting blocked by big cable. Not a time or money issue there. A lot of the new housing developments have conduit already laid, so you can just push out a line from the CO. When I dropped cable and got DSL, that’s what the guy did. Heck, it even had a pull line already waiting. For older places, sure, more of a problem, but even my parents in a development from the 70’s have FIOS deployed to them.

            • Krogoth
            • 5 years ago

            50Mbps isn’t need for streaming 1080p content. You can get away it with 5-10Mpbs downrate. The problem is the server-end not the client-end. It seems people quickly forget that servers and datacenters don’t have unlimited bandwidth access and they also have to pay for their connection speeds (which are even more expensive than most customer-tier plans). QoS policies are a common practice. It is no surprise that you may run into streaming issues with videos that have a high user demand.

            Politics is a big reason why the last mile is a PITA. NIMBY can be very powerful in some areas, and it is no surprise that incumbents are using local connections (pun intended) to make it difficult for newcomers to come in. On top of that, you have a large number of people who are just apathetic and don’t care enough about it to make a fuss.
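
            For a rough sanity check, here’s a back-of-the-envelope sketch; the bitrates are ballpark H.264 figures (my assumption, not numbers from this thread):

            [code<]
            # Bandwidth needed for 1080p streaming vs. a modest link.
            # Bitrates are ballpark H.264 figures (assumptions, not measurements).
            stream_bitrates_mbps = {
                "1080p, typical streaming": 5.0,
                "1080p, high quality": 8.0,
            }
            link_mbps = 10.0  # a mid-tier "10 Mbps downrate" plan

            for name, needed in stream_bitrates_mbps.items():
                print(f"{name}: ~{needed:.0f} Mbps needed, "
                      f"~{link_mbps - needed:.0f} Mbps of headroom on a {link_mbps:.0f} Mbps link")
            [/code<]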

        • superjawes
        • 5 years ago

        I think you’re greatly misjudging the demand for higher bandwidth. Maybe people don’t consciously “need” it, but the number of streaming services out there is going up, the new consoles are focusing more on digital distribution, and the existing services are only getting better.

        10Mbps might be “enough” for most people right now, but I think we’ll be running into bandwidth issues sooner rather than later, and ISPs should be doing more to increase network capacity than they currently are.

      • travbrad
      • 5 years ago

      In the last 10 years my connection has gone from 1.5Mbps/256Kbps to 30Mbps/6Mbps. If you went back 12 years, the fastest internet connection available to me was 56K. I guess I’m just lucky?

      It’s not like there is any competition here, either. Comcast has a strong local monopoly in this area. The closest thing to competition is a DSL provider with about 1/5th the speeds, worse pings, worse reliability, and customer service as bad as Comcast’s (hard to believe, I know).

      My bill has also gone up though, of course.

      • humannn
      • 5 years ago

      I think the 0.5M upload is a travesty, yes, but 15M down is fine for just about everything. You can stream 1080p, and web pages load in a snap. The only concern is downloading huge files, but I don’t do that often (and when I do, I just have to go do something else for 12 minutes instead of 6).
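
      A quick bit of arithmetic backs up that 12-vs-6-minute framing (the file size here is an assumed example, not a figure from the comment):

      [code<]
      # Time to download a large file on a 15 Mbps link. "12 minutes"
      # implies a file of roughly 1.35 GB (an assumed example size).
      link_mbps = 15.0
      file_gb = 1.35

      seconds = file_gb * 8000 / link_mbps  # GB -> megabits, then divide by Mbps
      print(f"{file_gb} GB at {link_mbps:.0f} Mbps: ~{seconds / 60:.0f} minutes")
      # Doubling the link to 30 Mbps halves that: 12 minutes becomes 6.
      [/code<]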

      • MEATLOAF2
      • 5 years ago

      I get 50Mb/10Mb; guess I’m lucky to live in an area with decent infrastructure. Can’t say the bill is low, though.

      • chµck
      • 5 years ago

      Where do you live?
      I get 50/5 for $10/mo in central Texas.

        • derFunkenstein
        • 5 years ago

        I get 50/10 but it’s more like $60, which I felt was reasonable considering I could dump Comcast at the same price.

    • atari030
    • 5 years ago

    Must….buy…..SSD!

    The old spinning platters just aren’t spinning quickly enough 🙂

    • MadManOriginal
    • 5 years ago

    Comcast.

    Or, for the occasions when I do heavily multithreaded stuff (encoding, especially video at high quality)… the CPU. But those occasions are rare enough that my OC’d 3570K is acceptable.

    • Takeshi7
    • 5 years ago

    I couldn’t decide between my very old 500GB system HDD and my very old GeForce 8800 GTSes. I went with graphics.

      • willmore
      • 5 years ago

      You need an “All of the above” option.

        • Meadows
        • 5 years ago

        Hah! But yes.

      • Rakhmaninov3
      • 5 years ago

      Sounds like your problem is the same as mine: oldness.

    • Tristan
    • 5 years ago

    Pls, add new option: None. I have balanced system without botlenecks.

      • jessterman21
      • 5 years ago

      *without spell-check.

      FTFY

    • peekpoke
    • 5 years ago

    Graphics, memory, and storage have all been improving at a steady rate.

    But CPUs hit a WALL about 5 years ago with the Core 2. A middle-of-the-road, blah Core 2 CPU from 6 years ago is 50% the speed of the fastest CPU you can buy today in app benchmarks.

    Unless there are some major breakthroughs in software technology that allow apps to suddenly make use of 8, 16 or more cores, we are dead in the water for desktop performance.

    Meanwhile, portable CPUs are catching up as the desktop crawls along…..

      • shaurz
      • 5 years ago

      Mobile SoC CPUs will catch up with desktop CPUs in a few years. Desktop GPUs will still have the thermal advantage due to trivial parallelisation, but CPUs are not benefiting from more transistors any more.

      • jihadjoe
      • 5 years ago

      Probably because desktop finally hit a thermal limit. A lot of the early performance jumps were big because many of the tricks that make processors fast today hadn’t yet been figured out, and since CPUs started off sipping so little power there was a lot of headroom to grow.

      Generational increases were also much bigger, but that was mostly because CPU generations lasted much longer than they do today. Whereas Intel and AMD now refresh their product lines with updates that give small 5-10% gains, they used to wait and consolidate everything until they could make a vastly faster chip. It’s crazy how AMD could be 5 years late with the Am386 and 4 years late with the Am486, and still be relevant to the market.
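
      A quick sketch of how those small per-generation gains compound; the 5-10% figures are from the comment above, while the six-generation count is my assumption for illustration:

      [code<]
      # Small per-generation gains compound slowly. Six generations is an
      # assumed count (roughly the Core 2 era to now).
      generations = 6
      for per_gen in (1.05, 1.10):
          total = per_gen ** generations
          print(f"{per_gen - 1:.0%} per generation over {generations} generations "
                f"-> {total:.2f}x overall")
      # ~1.34x to ~1.77x, in line with the claim upthread that a Core 2 chip
      # is roughly half the speed of a current part in app benchmarks.
      [/code<]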

      • Theolendras
      • 5 years ago

      Nehalem and Sandy Bridge were good improvements on the performance front, 20% to 30% each in many benchmarks: one introduced the first integrated memory controller and turbo, the other too much stuff to mention, being the first complete redesign since the PIII… Then there is a real plateau, so 5 years feels somewhat exaggerated.

      • UnfriendlyFire
      • 5 years ago

      On the flip side, the i7-720QM and the i7-4500U have similar multithreaded performance (single-threaded obviously going to the 4500U), but the first one is rated at 45W TDP and the second one at 15W TDP.

      Similar multithread performance, massively lower power consumption.

      • freka586
      • 5 years ago

      That would depend largely on what you use your desktop for. Gaming, yes. Productivity, no. I think an 18-core/36-thread CPU with 45MB of cache is pretty fancy.

      Portable and server/workstation parts have seen nice improvements with each generation. For gaming and casual desktop (“browsing”) needs, a quad or even dual core from a few generations back goes a long way. There has simply not been a need or push for bigger improvements.

    • UnfriendlyFire
    • 5 years ago

    8 hours of WiFi battery life can quickly disappear, especially if you set the battery’s minimum operating level at 30% instead of 5% to extend the battery’s lifespan, at roughly a 25% capacity penalty.

    Undervolting the CPU and setting the WiFi card to its lowest performance/power level helped extend battery life to close to 9 hours, but even then it isn’t enough.
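
    The capacity math works out roughly like this (a minimal sketch, assuming a full 100% charge in both cases):

    [code<]
    # Usable capacity when the discharge floor moves from 5% to 30%.
    # Assumes charging to 100% in both cases (a simplification).
    full, old_floor, new_floor = 100, 5, 30

    old_window = full - old_floor  # 95% of rated capacity usable
    new_window = full - new_floor  # 70% usable
    penalty = 1 - new_window / old_window

    print(f"capacity penalty: {penalty:.0%}")  # ~26%, close to the 25% quoted above
    [/code<]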

    EDIT: Software is also a major bottleneck. Lots of games and other software still heavily use only 1 or 2 cores, which means the unlocked Pentium was able to match quad-core CPUs on certain tasks.

    And that’s really sad.

    EDIT2: In terms of boot time, the POST at the beginning of start-up will become a bottleneck once M.2 and SATA Express SSDs become common.

    EDIT3: Power consumption by displays is also going to be greater than that of the CPUs in tablets and laptops.

    • csroc
    • 5 years ago

    At home I’m not significantly bottlenecked anywhere. My i7 3770K, 32GB RAM and 670FTW do what I need on my desktop.

    My old laptop is a bit more of a sad case, and my first-gen Surface Pro is not hugely dissimilar, though the SSD is a big improvement. Eventually I’d like to step up to an i5 mobile device with 8GB RAM and an SSD. I’d say screen size and vertical resolution on my Surface Pro 1 are bottlenecks.

    At work I sometimes need a third screen, but I’m most bottlenecked by the difficulty I’m having getting a 64-bit machine. I’ve got my 32-bit system maxed out and am constantly fighting memory hogs. If I could get a 64-bit image and 8GB of RAM, productivity would be higher and I’d be less frustrated.

    • kvndoom
    • 5 years ago

    Console exclusives.

    • bfar
    • 5 years ago

    I’d go with display resolution on this. I don’t miss the 25kg CRT taking up half my desk, but how I miss being able to change resolution/refresh rate at will…

    I’m on 1080p at 23″ and I can’t afford to change that any time soon…

    • maxxcool
    • 5 years ago

    most of all, x86 is the bottleneck. sure, we have come a long way with micro and macro ops.. but we are still spending WAY too much time taking code that was compiled, then uncompiling it, then fusing it to make RISC-like ops, re-ordering them, then executing them, and then tracking the execution patterns to try to guess what data and instructions to keep cached.

    >>**#$%^ing horribly awful design**<<

      • ronch
      • 5 years ago

      Exactly the reason why I don’t believe people when they say RISC ISAs have no inherent advantage over x86, or that x86 is no worse than ARM or MIPS.

        • Takeshi7
        • 5 years ago

        x86’s inherent advantage is that Intel is years ahead of any competitor’s silicon fabrication.

          • blastdoor
          • 5 years ago

          That’s not an inherent advantage to x86, it’s a confound that masks x86’s inherent disadvantages.

          • ronch
          • 5 years ago

          We’re talking strictly ISAs from an academic POV.

            • exilon
            • 5 years ago

            From an academic POV, that’s still wrong.

            [url<]http://www.extremetech.com/extreme/188396-the-final-isa-showdown-is-arm-x86-or-mips-intrinsically-more-power-efficient[/url<]

            The x86 tax is a myth propagated by those who have never cracked open a textbook on VLSI or computer architecture, like OP here.

            • blastdoor
            • 5 years ago

            I’ve seen that link posted as gospel many times, but I don’t buy it. It’s still correlational research, and I don’t believe they can really address the confound that exists between x86 and all of the resource advantages that Intel has. In other words, I think their findings are biased (not in the “they have a vested interest and are lying” sense, but in the “the expected value of their parameter estimate does not equal the true parameter, despite their best efforts” sense).

            The only way we can ever hope to know the answer to this question is if an equal amount of time and money is poured into a competing architecture. Up until recently, nobody has come close to putting as much time and money into a competing architecture as Intel has put into x86. But Apple might come close with the A-chips. Of course, there’s another confound there — Apple has more control over the software stack than Intel has, so that could bias things against x86.

            • kamikaziechameleon
            • 5 years ago

            “The only way we can ever hope to know the answer to this question is if an equal amount of time and money is poured into a competing architecture. Up until recently, nobody has come close to putting as much time and money into a competing architecture as Intel has put into x86. But Apple might come close with the A-chips. Of course, there’s another confound there — Apple has more control over the software stack than Intel has, so that could bias things against x86.”

            I think that is what the article says:

            “The RISC vs. CISC argument shouldโ€™ve passed into history a long time ago. It may still have some relevance in the microcontroller realm, but has nothing useful to contribute to the modern era. An x86 chip can be more power efficient than an ARM processor, or vice versa, but itโ€™ll be the result of other factors โ€” not whether itโ€™s x86 or ARM.”

            So yeah.

            • chuckula
            • 5 years ago

            Is x86 an ugly architecture?
            Sure.

            Is the modern x86 with 64-bit addressing, AVX, etc. etc. easily the most powerful architecture that’s available to the general public? Absolutely, and ARM can attest to this fact since basically every single change in the ARM ISA in the last 5 years has been an attempt to copy features of modern x86.

            It also comes with the bonus of backwards compatibility with 35 years of software at the cost of a small corner of a silicon die.

            • Ninjitsu
            • 5 years ago

            Apparently the new Xeons have a 486-class chip on die just for [i<]power management[/i<].

            • Takeshi7
            • 5 years ago

            If you’re just talking ISAs from an academic POV, I vote for MIPS. It was developed from an academic POV, and it has twice as many registers as ARM. Registers are power.

            • blastdoor
            • 5 years ago

            It would be interesting to do an anonymous poll of some of the best chip designers, asking them which ISA they would use to make the best CPU they can in terms of performance/watt if somebody came to them with $5 billion, 5 years to come up with the design, and guaranteed access to the best fabs in the world.

            edit — and a good follow-up question might be, “and how much of a difference do you think the ISA choice would really make in terms of the ultimate performance/watt of the final product?” (because it could be that they would all pick something other than x86, but then admit that it doesn’t actually matter too much)

            • Jason181
            • 5 years ago

            Academia is overrated. Many of the professors are teaching in lieu of actual work experience.

            • chuckula
            • 5 years ago

            Academia gave us the stunning commercial success stories of the Itanium and iAPX 432 (not too many of you have heard of that one!). Not to mention other “successes” like Cell (quietly killed after the PS3 launched) and VLIW (Intel i860, anyone? You’d think they would have learned their lesson, but they still made Itanium!)

            The list goes on. About the only really “academic” processor design that has enjoyed any form of success was MIPS, but the world has passed RISC by. Just go look at the latest “RISC” ISA from ARM and then go back and look at what RISC actually stands for… hint: it’s not “anything that isn’t from Intel”.

            • patrioteagle07
            • 5 years ago

            Itanium wasn’t bad, it just didn’t progress fast enough to keep up with x86. AMD used VLIW for its 5000- and 6000-series graphics cards. MIPS is router material. I think everything you mentioned had potential, but getting something new to push x86 out of dominance is hard.

            • Jason181
            • 5 years ago

            If Itanium couldn’t succeed with Intel’s backing, it wasn’t ever going to succeed. Unrealized potential makes it…. a failure.

            Part of a good design is creating a product that can reach its potential within the confines of reality. That is what academia fails at: the translation from theory to reality.

            • willmore
            • 5 years ago

            Are you kidding? MIPS is horrible. It’s used in education because of H&P and little else.

            You want a pretty architecture? Alpha.

      • Duct Tape Dude
      • 5 years ago

      The additional “problem” though is the amount of money and time spent developing and refining an architecture. We don’t have any pure RISC chips that can compete with x86’s performance, compatibility, and price (pick two, etc).

      I agree with your point, I’m just saying money and time are what make a CPU these days. x86 is several billion dollars and man hours ahead of anything else.

        • maxxcool
        • 5 years ago

        Yeah, it is always an issue of money and making the payoff and incentives big enough to switch.. hence the Itanic’s issue.

        just so horribly inefficient to reinvent the wheel 3.5x per clock cycle.

        • blastdoor
        • 5 years ago

        You are absolutely correct that time and money are huge. Really huge.

        But I think the gap is closing. The iPhone launched in 2007, and since then Apple has spent a lot of time and money both on chip design and funding the construction of fabs. For each of the last four years, Apple has ticked and tocked at the same time, upgrading both fab process and core design. Apple has not caught up to Intel yet, but they are closing the gap.

        It may seem odd to some that I focus on Apple rather than the foundries, but Apple matters far more than the foundries because Apple is the source of both funding and design. Also, competition for Apple’s business spurs the foundries forward unlike nothing else.

        I predict that next year, Apple will tick and tock yet again, with the A9 having another core upgrade on par with IB to Haswell, and fabbed on a finFET process (whether it’s TSMC or Samsung doesn’t really matter). Yes, I know that Intel’s 14nm process will still be ahead. But the point is that the gap is closing.

        I bet that something unprecedented will happen when we get to 10nm. I bet Intel will get there first, but that Intel will still be on 10 nm when Apple also gets there (that is, there will be a period of time, probably longer than a year, when both the best Apple and Intel have to offer are fabbed at 10nm). After 10 nm, who knows what happens.

          • UnfriendlyFire
          • 5 years ago

          Then it’s a race to find something better than silicon.

      • maxxcool
      • 5 years ago

      -3 ? haha.. what ? some people want extra heat and inefficiency ?

        • blastdoor
        • 5 years ago

        My observation is that if you question x86, you get voted down.
        If you complain about being voted down, you get voted down more.

        Now, if you really want to get down votes, criticize some aspect of how TR is run and say something positive about Apple, ideally at the same time.

          • maxxcool
          • 5 years ago

          Crapple is awesome down with x86 AND BRING ME AIRBORNE HERPES!

          • trackerben
          • 5 years ago

          Now that’s efficiency!

      • exilon
      • 5 years ago

      I still don’t understand how people like you confuse out of order execution with x86.

        • maxxcool
        • 5 years ago

          what’s not to understand? Build an ISA that does not need extra CPU cycles to reorder code to make it more efficient.

          • exilon
          • 5 years ago

          Almost every high performance consumer CPU is out of order. Has been for almost 25 years. That includes all high performance ARM offerings.

          They all do what you think only x86 CPUs do.

            • maxxcool
            • 5 years ago

            OoOE is fine. But spending large blocks of silicon to take “optimized code” and take it apart, then re-fuse it into new native code blocks, then take those blocks and make LI code to then execute, seems pretty damn inefficient. Granted, I don’t build these things, but I don’t take a TV dinner home, then separate the sauce and the scary-might-be-meat product, and re-engineer the molecules in both to turn them into carrots.

            • blastdoor
            • 5 years ago

            Another analogy… what’s more efficient, to write a report for an English-speaking audience in Japanese, have it translated into English, and then send it to an editor, OR…

            write the report in English and send it to an editor?

            Sure, in both cases you’ll send it to an editor to tighten up the language and rearrange words. But wouldn’t it be noticeably more efficient if you start off writing in English to begin with?

          • Andrew Lauritzen
          • 5 years ago

          > does not need extra cpu cycles to reorder code
          You really don’t understand CPU hardware do you…

            • maxxcool
            • 5 years ago

            ok, so the decoder and reorder blocks run their own scheduler. they simply should not exist.

            you really don’t get the point, do you?

            • exilon
            • 5 years ago

            You still are confusing x86 with out of order execution. Please stop.

            • Andrew Lauritzen
            • 5 years ago

            I’m pretty sure I get the point…

            No offense, but fundamentally anyone who thinks ISAs are particularly relevant/important these days probably doesn’t understand how hardware works.

            • blastdoor
            • 5 years ago

            If the ISA doesn’t matter, why do companies like Intel, AMD, and ARM keep tweaking their ISAs? Is it because they don’t understand how hardware works? Maybe you should go enlighten them.

            • exilon
            • 5 years ago

            What are you even trying to argue here? ISAs constantly get tweaked through new extensions to support wider processing and new features. ISAs really don’t matter… because ISA stands for instruction set architecture. It’s the implementation of an ISA that determines performance and wattage beyond a point.

            You certainly would not want a low-performance 50mW microprocessor to be running x86, but once uArch designs get past a certain performance level, it doesn’t matter whether it’s CISC or RISC. OoOE implementation, issue width, cache size, and other performance design choices are much more relevant to performance/watt.

            • blastdoor
            • 5 years ago

            I’m questioning the assertion that ISAs don’t matter. To me that sounds a lot like the assertion that design itself does not matter, which I think is wrong.

            Computers exist to do things that we want them to do. From the UI down to the ISA, it seems to me that the challenge for designers is to come up with a way to efficiently communicate user intent to the computer, and then to have the computer efficiently execute that intent. At any given step in that process, there are many different options available to communicate intent. And these options all involve tradeoffs. A good design is one that chooses the right mix of tradeoffs given the constraints faced and the objectives one is trying to achieve. I have yet to experience anything in life where there are not tradeoffs and where design does not matter. But it sounds to me like that is what some people are saying when they say that “ISA does not matter”.

            When people talk about ARM vs x86, the issue is whether one of those ISAs as they exist today strikes a better set of tradeoffs in communicating intent to a processor in a given context (for example, a smartphone). It’s possible that they are equally well suited for that purpose. It’s also possible that they are not, but that the huge number of advantages that Intel has (best fabs in the world, best designers in the world) more than make up for any disadvantage inherent to x86 for this context.

            So, in a nutshell, I’m arguing that ISA probably matters. I’m not arguing that ARM is better than x86 or vice versa in any global sense. I’m just arguing that the choices people make in selecting, designing, and modifying an ISA matter. And more broadly, I’m arguing that design matters.

            • Jason181
            • 5 years ago

            Perhaps a better way of saying it is that ISAs these days are constantly being extended to meet the current needs of the market, so if you pick an ISA that’s doing that, the critical part really is designing hardware to implement it. No matter what ISA, it all comes down to ones and zeroes at the hardware level.

          • the
          • 5 years ago

          Intel did that. It was called Itanium, and it was an utter failure when it came to performance.

      • maxxcool
      • 5 years ago

      -8 ? COMMON? WHAT DO I NEED CAPS ?!

        • Meadows
        • 5 years ago

        Try -21, fool.

          • maxxcool
          • 5 years ago

          Looks like -24 fool

        • Jason181
        • 5 years ago

        Took me a bit to realize you were actually trying to say “Come On?”

        Since pretty much everything you describe is done in hardware, and that hardware takes a minuscule amount of the die (excluding out-of-order execution, which is there to enhance performance and is not part of the x86 ISA), I think you’ll be hard-pressed to make the case that it’s a “bottleneck” without a much better explanation of your opinion than you’ve given here (in the original comment and replies).

        It would be a bottleneck if the decoding and issuing of instructions was slower than the rest of the system, but it’s not. I think you’d probably do better to argue that x86 has too few registers and/or the instruction set is too large, where microcode is required for certain operations.

      • Firestarter
      • 5 years ago

      OK, where’s the CPU that has better single-threaded performance? And can we buy it?

        • maxxcool
        • 5 years ago

        Right under your nose.

        • the
        • 5 years ago

        Actually, the chip with the best single-threaded performance in the general case is the POWER8. Depending upon what is tested, Intel’s Haswell can come out on top, but that’s expected as one explores more niche cases.

          • tipoo
          • 5 years ago

          Actually, if we’re strictly speaking of a single thread and not a core, the POWER8’s performance isn’t impressive at all. It relies on multi-way SMT per core for peak performance, so you’d have to talk about multithreaded performance to talk about just one core.

          Each core can handle a whopping 8 threads, so each thread actually has rather weak performance relatively, but all 8 together have huge throughput.

          [url<]http://www.itjungle.com/tfh/tfh060214-story01.html[/url<]

            • the
            • 5 years ago

            Actually, your link supports the conclusion that POWER8’s single-threaded performance is rather impressive. Note the nice 50% speed increase from POWER7 to POWER8 in SMT1 mode? POWER7 is roughly on par with Ivy Bridge-EX in terms of single-threaded performance, but POWER8 leapfrogs them both. And yes, enabling SMT8 only further increases performance.

      • Voldenuit
      • 5 years ago

      Downvotes are probably because the ISA is no longer the bottleneck.

      In the “old days”, the extra transistors needed to translate x86 into uops were significant, and could have been used for more cache, more logic, etc etc.

      These days, the “x86 overhead” is such a small percentage of the overall transistor budget that it is essentially negligible.

      Also, don’t forget that most RISC architectures also do a similar translation into uops these days (and have for a while).

      Complaining about x86 overhead is like complaining about the speed of telephone switchboards – the problem has been “solved”, for most intents and purposes.

      • the
      • 5 years ago

      This ‘horribly awful design’ is something that will have to be dealt with by [i<]any[/i<] ISA that wants to have multiple implementations. Even RISC designs will break down instructions into micro-ops for more efficient execution and to schedule instructions.

      All an ISA does is define the instructions and their behavior for software. How the ISA is implemented in hardware is left open by the ISA spec. That is why x86 has choices from Intel and AMD. This is also why ARM designs can come from both ARM’s Cortex line and Qualcomm’s Snapdragon line but run the same software.

      The concept of a single-use ISA was removed from the market decades ago due to the need to run legacy software. No commercial software vendor wants to continually recompile and distribute applications with every new iteration of a purpose-built architecture. Businesses also don’t want to continually purchase new software licenses as they upgrade/replace hardware.

      Sure, the x86 architecture has some baggage, but Intel and AMD have put in tremendous effort to work around it. The great argument against x86 typically cites the decode units. The presence of large decode units mostly impacts die size and the additional energy to run them, as actual throughput is determined by the execution resources and the scheduler’s efficiency in using those execution resources.

      • derFunkenstein
      • 5 years ago

      I award you no points, and may God have mercy on your soul.

      edit: [url<]https://www.youtube.com/watch?v=5hfYJsQAhl0[/url<]

      • maxxcool
      • 5 years ago

      I come back from the weekend to find this. /warms my soul/ Hahaha, -31… and a minor crapstorm. Nice 🙂

    • ahmedabdo
    • 5 years ago

    Generally speaking, if we consider a high end system, then I think the software is currently the biggest bottleneck! Programs and games these days rarely take full advantage of the power and versatility of today’s systems, imo.

    • DPete27
    • 5 years ago

    1) Every gamer could always use a better (or another) GPU. (This was my personal vote.)
    2) The majority of non-enthusiast PCs in the world are still severely bottlenecked by having the OS/programs on mechanical HDDs. So, while most TR readers probably have an SSD by now, if I were voting for “the world” I’d go with storage.

    • Milo Burke
    • 5 years ago

    With 4k nearly here for the gamer, we need big improvements in graphics.

    Displays are tricky because you can’t have a reasonably affordable display that is high res, high framerate, and has great color reproduction.

    SSDs are bottlenecked by SATA3 and by a lack of viable options for M.2. Also, SSDs are still too small without paying a lot of dough, and the lower capacity SSDs can’t keep up with the larger capacity ones.

    We’re limited in that most computing, especially games, is too focused on a single thread’s performance.

    Software scaling is also a huge issue. The industry needs to get on that.

    And RAID5 is becoming obsolete, with undetected errors now likely enough (based on drive size and error probability) that rebuilds fail. We need more robust error protection/recovery, and/or more elegant RAID1 products.
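
    The back-of-the-envelope version of that rebuild-failure argument, with assumed example figures (4TB drives, a 4-drive array, and the oft-quoted 1-in-10^14 URE rate):

    [code<]
    import math

    # Chance of hitting at least one unrecoverable read error (URE) while
    # rebuilding a degraded RAID5 array. All figures are assumed examples.
    ure_per_bit = 1e-14
    drive_tb = 4
    surviving_drives = 3  # a 4-drive RAID5 rebuild reads the other three in full

    bits_read = surviving_drives * drive_tb * 1e12 * 8
    p_ure = 1 - math.exp(bits_read * math.log1p(-ure_per_bit))

    print(f"P(rebuild hits a URE): {p_ure:.0%}")  # ~62% with these numbers
    [/code<]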

    And I’m still waiting for the NUC or pico-PC or whatever that is sufficient, affordable, and reasonably elegant.

    However, we’ve made incredible strides in battery life for laptops, smartphone/tablet computing prowess, and server core counts.

      • Krogoth
      • 5 years ago

      4K gaming is just another fad. I’d rather wait for textures (512×512 is still the norm!), animation, and model complexity to catch up before worrying about increasing resolution.

      I/O throughput hasn’t been a problem for the mainstream since 2.5″ SSDs became commonplace and affordable. There’s no killer app in the mainstream that requires the use of PCIe/SATAe SSDs.

      RAID is junk food for backup; it is meant for data availability/uptime. RAID5 is already obsolete and has been replaced by RAID6 and other solutions in those areas.

      Software scaling is a very complicated problem; suffice it to say that not everything can be parallelized, and there are diminishing returns.

      Display tech has stagnated for almost a decade. G-Sync and its kin are old concepts that try to band-aid the limitations of LCD tech.

    • odizzido
    • 5 years ago

    I suppose the one I run into the most is not having enough cores. I have four, but I think I would need 6-8 to run everything I want comfortably. The next one I run into is RAM. I only have 8 gigs, but my next system will have at least 16.

    The thing I want the most, though, is a new graphics card. My 5850 is old and mangled. My i5 750 isn’t that new either, but at least it’s not mangled.

    Honestly, though, just like Metonymy said, my bottleneck is money. Nothing I have mentioned couldn’t be solved with a liberal application of money.

      • blastdoor
      • 5 years ago

      I want more cores too.

    • Prestige Worldwide
    • 5 years ago

    The CPU used to be a bottleneck when running SLI on an i5 750 @ 3.04 GHz.

    Overclocking it to 4 GHz did the trick, and then upgrading to an i7 3820 @ 4.4 GHz did even better!

    But now, the GPU is my bottleneck. I usually use one GTX 670 2GB, occasionally ripping the second 670 out of my other PC when I want more power for a demanding game.

    I would like a nice, faster single GPU, but current high-end prices are too high.

    Interested to see what the GTX 980 on GM204 in a few weeks, and the real high-end GM200 early next year, will have to offer.

    • Ninjitsu
    • 5 years ago

    My Q8400 bottlenecks a lot of games, and takes quite a while for some tasks, like video editing.

    1080p is a bit of a stretch for my GTX 560 too, but my CPU is hit first far more often.

    • superjawes
    • 5 years ago

    I put graphics, but I really mean my display. My 7950 does really well at 1080p, but I’d like to upgrade to a 2560×1440 monitor, and that would necessitate a beefier GPU to go with it.

    • Milo Burke
    • 5 years ago

    My desktop is bottlenecked by graphics. I can’t play games at the settings I want and the resolution I want with my current card, or any card I can afford.

    My laptop (a bit older) is limited by storage speed, battery life, 2-channel-only HDMI audio, and the inability to flawlessly send 1080p to my TV.

    • Metonymy
    • 5 years ago

    money

      • derFunkenstein
      • 5 years ago

      Awesome.

      • Firestarter
      • 5 years ago

      power, too

      • Aerugo
      • 5 years ago
        • Jason181
        • 5 years ago

        You weren’t old enough to care about this stuff in the 80s, were you?

          • lilbuddhaman
          • 5 years ago

          Nah, a lot of us youngin’s found out on our own about all the mistakes you old folks made in the past, and since you didn’t warn us, it’s past the point of being fixed.

          Thanks

            • NeelyCam
            • 5 years ago

            Don’t give up. Start voting for socialists. Things will get better.

            • Jason181
            • 5 years ago

            It’s not our job to teach you history; it’s your job to learn it. The 80s, by the way, did not precipitate today’s problems. It was a warning that should have been heeded, but was ignored in the pursuit of easy money.

            I suspect you’re young enough to actually believe we had enough control over our government to prevent the 80s from happening. Someday, in hindsight, you’ll see that control of the government is an illusion (at the federal level, anyway), and the next generation will be blaming you for its missteps.

        • NeelyCam
        • 5 years ago

        Dumping trillions of dollars into the market just means that stuff from China gets more expensive. USA internal production costs don’t change (assuming materials are also sourced from the USA). The price of stuff might go up, [i<]assuming[/i<] wages move up accordingly (people need to stop resisting minimum wage increases).

        This effectively reduces the value of both savings and debts in dollars, transferring wealth from the top 1% to the bottom 99%. That $100k mortgage used to mean that someone owed 5lbs worth of gold, but now they owe only 3lbs worth. At the same time, that $10mil of savings some fat cat has hidden in the Bahamas went from 500lbs of gold to 300lbs.

        The “conservatives” (=1%) are spending lots of dollars to spread the message that QE is bad. And it is bad. For them. It’s great for everyone else. But it’s easy to brainwash the masses, and the investment in spreading the message has a huge ROI as long as people believe it.

          • Jason181
          • 5 years ago

          Sorry, but minimum wage is a minimum. It’s not meant to be a living wage. It’s a foot in the door-prove your worth-wage.

          We went off the gold standard a LOOOONNNNNGG time ago. We’re cooking up a recipe for stagflation, which is very, very bad for the working class. You seem to believe you have to be rich to be a conservative, but since nearly half the nation identifies as conservative the math really doesn’t work.

            • NeelyCam
            • 5 years ago

            [quote<]You seem to believe you have to be rich to be a conservative, but since nearly half the nation identifies as conservative the math really doesn’t work.[/quote<]

            I mean that acting “conservative” makes sense for the rich, who vastly benefit from “conservative” policies. For others, it doesn’t make sense. But people aren’t always logical, and they often vote against their best interests if the TV tells them to.

            • Jason181
            • 5 years ago

            This is getting deep into R&P territory.

            I am a reasonable, intelligent individual, not naive in the ways of the world. Your implication that I only believe the way I do because the “TV tells me to” is offensive and baseless.

            Do you really believe that you’re right (no pun intended) to the exclusion of every dissenting opinion, and that nobody could possibly come to a conclusion other than yours if they carefully consider the facts?

            The purpose of this post isn’t to “win” an argument, but rather to examine whether it’s even a discussion worth having.

            I couldn’t disagree more with your statements, but I don’t automatically dismiss you as a blind or hapless victim of liberal propaganda. Instead, I understand that there are well-reasoned arguments on both sides, and believing differently than I do doesn’t make you less of a person, as you seem to think of conservatives who are not “rich.”

            • NeelyCam
            • 5 years ago

            [quote<]Do you really believe that you’re right (no pun intended) to the exclusion of every dissenting opinion, and that nobody could possibly come to a conclusion other than yours if they carefully consider the facts?[/quote<]

            No, I don’t.

            [quote<]The purpose of this post isn’t to “win” an argument, but rather to examine whether it’s even a discussion worth having.[/quote<]

            The discussion about excluding or not excluding dissenting opinions could be worth having, except that you and I agree on that, so we have no reason to have it. Meanwhile, a discussion about whether liberals or conservatives are “right” is definitely not worth having, unless one just likes to spend time arguing on the intertubes.

            Your classy answer is the best way to disarm trolls like me. Calm, reasonable, logical post without even a hint of personal attacks.

            Friends?

            • Jason181
            • 5 years ago

            Friends. 🙂

      • UnfriendlyFire
      • 5 years ago

      What about European electricity rates?

      • green
      • 5 years ago

      saw in top comments
      made me wtf-click
      would up-vote again

        • moose17145
        • 5 years ago

        Same here. I am upvote 100.

      • Meadows
      • 5 years ago

      Pretty good upvote-to-words ratio, all things considered.

        • NeelyCam
        • 5 years ago

        Probably a record

      • ronch
      • 5 years ago

      Money really does attract everything, doesn’t it? Even upvotes.

      • Billstevens
      • 5 years ago

      this man speaks truth

      • chµck
      • 5 years ago

      Wow, so deep.
      TR needs to invite this guy to do a podcast session.

      • Peter.Parker
      • 5 years ago

      Time!
      (Yet Another Pink Floyd Song)

      • Pez
      • 5 years ago

      Is this the most up-voted comment on TR ever?

        • NeelyCam
        • 5 years ago

        Yes.

      • nico1982
      • 5 years ago

      I had to upvote this even if I’m 9 days late 😀

      • flip-mode
      • 5 years ago

      It’s September 28 now, but have another upvote (for 149).

    • chuckula
    • 5 years ago

    Given that 4K monitors are going to be the thing over the next couple of years, graphics (for gaming at least) are probably somewhere high on most lists.

    Storage is a biggie too. SSDs have been great but moar is still better there.

      • superjawes
      • 5 years ago

      Well, not just 4K. 1080p has been around for so long that I’m sure people are looking for a resolution upgrade of any sort. 2560×1440 is also a great step up, and it would require more horsepower.

    • geekl33tgamer
    • 5 years ago

    *Looks at FX-8350* Stupid HyperTransport link. It hasn’t even got enough bandwidth to feed dual CrossFire properly…

    • HisDivineOrder
    • 5 years ago

    My system seems pretty balanced, but I chose GPU because of the 2GB on my 670 SLI. I feel like that’s the next thing to require upgrading, especially with DX12 coming next year or not long after.

    • derFunkenstein
    • 5 years ago

    Definitely graphics. I have enough storage with a 256GB SSD and a 2TB HDD, plenty of RAM with 16GB, and plenty of CPU with a 3570K at 4.5GHz. My GPU is a GTX 760 that I got in April, and even something that beefy can get dragged down depending on what’s happening in Diablo III at totally maxed-out settings at 1080p. I play with Vsync enabled, and on rare occasions it’ll stutter down into what I’m guessing is the 20fps range when the environment is full of crazy effects like Plague, Molten, and Arcane turrets, especially in certain Act 5 zones, like when Westmarch is on fire. If I could turn back time I might have gotten a GTX 770, but as it is I’ll live with it. Not going to throw more money at another Kepler card with big Maxwell coming up.

    • homerdog
    • 5 years ago

    With an i7-3770K, 16GB RAM, Intel SSD, big hard drives and everything else, the limiting factor for me is the 2GB GTX670. But I’ll be keeping the 670 until a faster card with at least 4GB is available for $300.

      • Vaughn
      • 5 years ago

      I voted other.

      The one major item missing from the list is network connectivity.

      With most computers having SSDs, the 1Gbps Ethernet port is a bottleneck for file transfers to other SSD-equipped machines on the network.

      Of all the choices listed, I build my machines to avoid them during their lifetime.

      Minus battery life, of course, which is moot on a desktop PC.

      The lack of 10Gbps on consumer boards is it for me right now.
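
      A rough sketch of why the link, not the drives, sets the pace here; the throughput figures are ballpark assumptions:

      [code<]
      # Copying between two SSD machines over gigabit Ethernet.
      # Throughput figures are ballpark assumptions, not measurements.
      gige_mb_s = 118      # ~1 Gbps minus framing/protocol overhead
      sata_ssd_mb_s = 500  # typical SATA3 SSD sequential throughput
      file_gb = 100        # an assumed example transfer

      link_minutes = file_gb * 1000 / gige_mb_s / 60
      ssd_minutes = file_gb * 1000 / sata_ssd_mb_s / 60
      print(f"{file_gb} GB over GigE: ~{link_minutes:.0f} min; "
            f"the SSDs alone could do it in ~{ssd_minutes:.0f} min")
      [/code<]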

        • Krogoth
        • 5 years ago

        10Gbps is going to stay enterprise/prosumer-tier for several more years. There’s no mainstream demand for it. Gigabit Ethernet yields enough bandwidth to handle HDD transfers (home-brewed NAS boxes or poor man’s NAS boxes). If you need more network bandwidth, chances are you are using your computers for real work and can probably justify the cost associated with 10Gbps Ethernet and beyond.

        Besides, the mainstream is opting for wireless over wired Ethernet.

    • ShadowTiger
    • 5 years ago

    This reminds me of how glad I am to have an ssd now!
