CPU vs GPU for gaming

Discussion in 'Wish Lists' started by nipzon, Mar 18, 2018.

  1. nipzon

    nipzon Registered

    Joined:
    Dec 23, 2011
    Messages:
    247
    Likes Received:
    35
     Quoting Bitcoinwiki: “A CPU core can execute 4 32-bit instructions per clock (using a 128-bit SSE instruction) or 8 via AVX (256-Bit), whereas a GPU like the Radeon HD 5970 can execute 3200 32-bit instructions per clock (using its 3200 ALUs or shaders). This is a difference of 800 (or 400 in case of AVX) times more instructions per clock. As of 2011, the fastest CPUs have up to 6, 8, or 12 cores and a somewhat higher frequency clock (2000-3000 MHz vs. 725 MHz for the Radeon HD 5970), but one HD5970 is still more than five times faster than four 12-core CPUs at 2.3GHz (which would also set you back about $4700 rather than $350 for the HD5970).”

     With so much more processing power available for blockchain work, is there a possibility that the same method of processing could be applied to gaming? Could a game's code be written to run on a GPU?
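
     A rough sketch of what that kind of processing looks like in code (the array size and names below are made up for illustration): on a CPU one core walks through the loop element by element, while the CUDA version launches one lightweight thread per element and lets the hardware run thousands of them at once.

        // vec_add.cu - illustration only, one GPU thread per array element
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void addKernel(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
            if (i < n)
                c[i] = a[i] + b[i];                         // each thread does one tiny piece
        }

        int main() {
            const int n = 1 << 20;                          // ~1 million elements (arbitrary)
            float *a, *b, *c;
            cudaMallocManaged(&a, n * sizeof(float));
            cudaMallocManaged(&b, n * sizeof(float));
            cudaMallocManaged(&c, n * sizeof(float));
            for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

            // CPU version: one core, one element after another
            for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];

            // GPU version: thousands of threads, one element each
            addKernel<<<(n + 255) / 256, 256>>>(a, b, c, n);
            cudaDeviceSynchronize();

            printf("c[0] = %f\n", c[0]);                    // expect 3.0
            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }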
     
  2. Lazza

    Lazza Registered

    Joined:
    Oct 5, 2010
    Messages:
    12,345
    Likes Received:
    6,572
     You could google this and find lots of answers. I think the core thing you're overlooking is that GPUs have massive power when they're doing lots of small, similar tasks in parallel. That's good for graphics, not so good for logic. That's why we have a CPU and a GPU - the CPU decides what's being done (especially with the OS in the background) and gets the GPU doing the graphical stuff it's better at.

     We don't have/use 512-core CPUs because we wouldn't make use of them for running an OS and programs (heck, not many people use more than 4 cores); it's better to have fewer cores running faster.
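
     To make that concrete, here's a rough sketch (the game-state fields and numbers are invented, not from any real engine): the logic loop below has to run in order because each frame depends on the previous one, so extra cores don't help it, whereas the pixel-style kernel has no such dependency and splits happily across thousands of GPU threads.

        // Illustration only - invented names, not a real engine
        #include <cstdio>
        #include <cuda_runtime.h>

        struct GameState { float playerX; int score; };

        // Game logic: every step depends on the previous one, so it is
        // inherently sequential - more cores don't speed up one chain of decisions.
        GameState stepLogic(GameState s, float input) {
            if (input > 0.5f) s.playerX += 1.0f; else s.playerX -= 1.0f;
            if (s.playerX > 100.0f) s.score += 1;
            return s;
        }

        // "Graphics-style" work: every pixel is independent, which is
        // exactly what the GPU's thousands of threads are built for.
        __global__ void shadePixels(float* brightness, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) brightness[i] = brightness[i] * 0.9f + 0.1f;
        }

        int main() {
            GameState s{0.0f, 0};
            for (int frame = 0; frame < 1000; ++frame)      // must run in order
                s = stepLogic(s, (frame % 3) / 3.0f);

            const int pixels = 1920 * 1080;
            float* fb;
            cudaMallocManaged(&fb, pixels * sizeof(float));
            for (int i = 0; i < pixels; ++i) fb[i] = 0.5f;
            shadePixels<<<(pixels + 255) / 256, 256>>>(fb, pixels);
            cudaDeviceSynchronize();

            printf("playerX=%.1f score=%d fb[0]=%.2f\n", s.playerX, s.score, fb[0]);
            cudaFree(fb);
            return 0;
        }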

     Here are a couple of good answers. The second one is quite clear.

    https://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
     
  3. nipzon

    nipzon Registered

    Joined:
    Dec 23, 2011
    Messages:
    247
    Likes Received:
    35
     Thx for the link, Lazza - great read and answers.
     
  4. SPASKIS

    SPASKIS Registered

    Joined:
    Sep 7, 2011
    Messages:
    3,155
    Likes Received:
    1,426
     It should be noted that GPUs can indeed be used in simulation. Many commercial simulation platforms still run on a single core, and CPU/GPU parallelization is an extra feature you have to buy.

     Using the GPU to solve the equations is a more recent feature, and I am not sure how well it works since it was only recently introduced in the platform I use. I only have the CPU parallelization module, which works quite efficiently up to four cores.

     I will ask how efficient the GPU usage is in my platform and report back.

     In this link, it is stated that for a given example the GPU beat the CPU by a factor of 3-5.
    https://www.edemsimulation.com/blog/simulate-faster-new-edem-gpu-solver/
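
     For what it's worth, here's a rough sketch of what "CPU parallelization across four cores" versus a GPU solver means for an independent-particle update (this is not EDEM's API, just an invented example):

        // Illustration only - not EDEM's API, just an invented particle update
        #include <cstdio>
        #include <thread>
        #include <vector>
        #include <cuda_runtime.h>

        // One explicit time step for independent particles: x += v * dt
        __global__ void stepGPU(float* x, const float* v, float dt, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) x[i] += v[i] * dt;
        }

        void stepCPU(float* x, const float* v, float dt, int begin, int end) {
            for (int i = begin; i < end; ++i) x[i] += v[i] * dt;
        }

        int main() {
            const int n = 1 << 20;                          // number of particles (arbitrary)
            const float dt = 0.001f;
            float *x, *v;
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&v, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 0.0f; v[i] = 1.0f; }

            // "CPU parallelization module": split the particles across 4 host threads
            std::vector<std::thread> workers;
            const int chunk = n / 4;
            for (int t = 0; t < 4; ++t)
                workers.emplace_back(stepCPU, x, v, dt, t * chunk, (t + 1) * chunk);
            for (auto& w : workers) w.join();

            // "GPU solver": one thread per particle
            stepGPU<<<(n + 255) / 256, 256>>>(x, v, dt, n);
            cudaDeviceSynchronize();

            printf("x[0] after two steps = %f\n", x[0]);    // expect 0.002
            cudaFree(x); cudaFree(v);
            return 0;
        }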
     
    nipzon likes this.
  5. Lazza

    Lazza Registered

    Joined:
    Oct 5, 2010
    Messages:
    12,345
    Likes Received:
    6,572
     For gaming, which tends to involve graphics, the issue you'd quickly hit is that you'd be using the GPU both for some simulation code and for graphics. I don't think that's going to work very well.

    If, hypothetically, you were able to write the bulk of your game code to run on a GPU, you'd need a GPU for that code and a GPU for the graphics stuff. And you'd probably call the first GPU a CPU if you redesigned the PC to suit it (or you'd make 2 GPUs a system requirement).

    So yes, you could simulate certain types of things with a GPU as it stands right now (generally large numbers of relatively simple objects) but you wouldn't do it for a game.

     In our context, I wonder (hypothetically again - I wouldn't expect or wait for such a thing) if ttool could be made to utilise the GPU in order to speed up its operations when building tyres. No doubt it would require a massive rewrite to make it possible, but that would be a logical use of the GPU, where the GPU isn't really being used for anything else at the same time.
     
  6. nipzon

    nipzon Registered

    Joined:
    Dec 23, 2011
    Messages:
    247
    Likes Received:
    35
     Great read guys, thx. From all of this it is clear that what seemed impossible a year ago is starting to see the light, and who knows - in the future we might see two processors per graphics card to overcome the limitations of one GPU, or the current GPU evolving to the point where the CPU is mainly used for the OS.
     
  7. stonec

    stonec Registered

    Joined:
    Jun 19, 2012
    Messages:
    3,399
    Likes Received:
    1,488
     The limitation is not so much the GPU itself as the way code is written today. Most games, even optimized ones, rely on a couple of main threads running the game. GPU speed comes entirely from parallelism - Nvidia cards, for example, have up to a couple of thousand CUDA cores - but besides graphics, it's difficult to parallelize things like real-time physics engines. So as long as the main game is running on a couple of threads, you will always be dependent on the CPU with the most power per single core available. GPU computing doesn't bring any benefit unless you manage to split your code.
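
     To illustrate that last point (the kernel names and sizes here are mine): the same arithmetic launched as a single GPU thread - roughly what un-split, main-thread-style code looks like to a GPU - versus split one element per thread. Only the second version actually uses those thousands of CUDA cores.

        // Illustration only: the GPU only pays off once the work is split up
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void unsplit(float* data, int n) {
            // One thread grinds through everything - the other thousands of
            // CUDA cores sit idle, so this is no faster than a single CPU core.
            for (int i = 0; i < n; ++i) data[i] = data[i] * 2.0f + 1.0f;
        }

        __global__ void split(float* data, int n) {
            // Same arithmetic, but one element per thread - this is where the
            // "couple of thousand CUDA cores" actually get used.
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] = data[i] * 2.0f + 1.0f;
        }

        int main() {
            const int n = 1 << 22;
            float* data;
            cudaMallocManaged(&data, n * sizeof(float));
            for (int i = 0; i < n; ++i) data[i] = 1.0f;

            unsplit<<<1, 1>>>(data, n);                     // un-parallelised version
            cudaDeviceSynchronize();

            split<<<(n + 255) / 256, 256>>>(data, n);       // properly split version
            cudaDeviceSynchronize();

            printf("data[0] = %f\n", data[0]);              // 1 -> 3 -> 7
            cudaFree(data);
            return 0;
        }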
     
