AMD's CTO Mark Papermaster just put up this slide showing the company's HSA (Heterogeneous Systems Architecture) roadmap through 2014. This year we got Graphics Core Next; next year we'll see a unified address space that both AMD CPUs and GPUs can access (today CPUs and GPUs mostly store separate copies of data in separate memory spaces). In 2014 AMD plans to deliver HSA-compatible GPUs that allow for true heterogeneous computing, where workloads run seamlessly on both CPUs and GPUs in parallel. The latter is something we've been waiting on for years now, but AMD seems committed to delivering it in a major way in just two years.
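To make the "separate copies" point concrete, below is a minimal sketch of how a discrete-memory GPU is fed today through the standard OpenCL 1.x C API. It only illustrates the current copy-in/copy-out model, not AMD's HSA interface; the buffer size and variable names are invented for the example.

```c
/* Today's split-memory model: the host must explicitly stage data into a
 * separate device buffer and copy results back. Compile with -lOpenCL. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    err  = clGetPlatformIDs(1, &platform, NULL);
    err |= clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* The data starts out in ordinary CPU memory... */
    const size_t n = 1024;
    float *host_data = malloc(n * sizeof(float));
    for (size_t i = 0; i < n; i++)
        host_data[i] = (float)i;

    /* ...and a second copy has to be created in the GPU's memory space. */
    cl_mem dev_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                    n * sizeof(float), NULL, &err);
    clEnqueueWriteBuffer(queue, dev_buf, CL_TRUE, 0,
                         n * sizeof(float), host_data, 0, NULL, NULL);

    /* (A kernel would run on dev_buf here.) Results must then be copied
     * back to the host copy before the CPU can touch them. */
    clEnqueueReadBuffer(queue, dev_buf, CL_TRUE, 0,
                        n * sizeof(float), host_data, 0, NULL, NULL);

    clReleaseMemObject(dev_buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}
```

That explicit clEnqueueWriteBuffer/clEnqueueReadBuffer round trip is exactly the traffic a shared CPU/GPU address space is meant to remove: both sides would simply dereference the same pointers.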

9 Comments

  • philbotimus - Thursday, February 2, 2012 - link

    ... was ahead of its time.
  • tipoo - Thursday, February 2, 2012 - link

    This seems obvious for cost reduction, but it remains to be seen whether unified memory will constrain performance in some way. For low-end integrated chips, sure, but the truly hybridized chips they are going for will require gobs of bandwidth and low latencies.
  • MrSpadge - Thursday, February 2, 2012 - link

    Unified logically doesn't have to mean unified physically. Think of NUMA with heterogeneous nodes. (A rough sketch of this idea, using libnuma, follows at the end of the comments.)
  • tipoo - Thursday, February 2, 2012 - link

    Hmm I see what you mean. That would be interesting.
  • Morg. - Monday, February 6, 2012 - link

    Well, I believe main memory systems have been lagging behind for a while now; it would make sense to see GDDR5 on the CPUs, especially when moving to APUs and UPUs and a shared memory space.

    In the end, the GPU will end up inside the CPU anyway, like all PUs before it, and as that drives up the need for non-crappy memory bandwidth, there's a good chance that main memory systems will come closer to what's available for GPUs today - probably in a tiered way such as:

    CPU - L0 - L1 - L2 - L3 - GDDR5/6 - DDR3/4 - NAND - HDD
  • Zoomer - Saturday, February 11, 2012 - link

    Most apps won't see a huge difference; caching mechanisms hide latency and bandwidth constraints very well. And GDDR5 is very power-hungry.
  • Beenthere - Thursday, February 2, 2012 - link

    Being able to use all available CPU/GPU performance at any time, for any workload, is likely to improve system performance. This is AMD moving forward with better ideas and better use of all the computing power already in the system.
  • Azethoth - Thursday, February 2, 2012 - link

    It seems like the big trend going forward is more and more software functionality being implemented in the CPU/GPU, with the differences between those two disappearing as well.

    GPUs led the way by implementing graphics this way. Now Intel adds encoding/decoding on chip. Peripheral control and networking crept on board as well. Sensors and Wi-Fi are probably not too far behind.

    What else makes the on-board grade? Speech recognition? Maybe high-level programming patterns like event handling?

    I think multi-core just doesn't cut it outside servers and some lucky algorithms that can leverage it.
  • R3MF - Friday, February 3, 2012 - link

    2012: GPU can access CPU memory / 2013: unified memory for CPU & GPU - will that deliver the holy grail of effectively limitless memory for GPU compute in 3D rendering, i.e. using system memory while rendering 3D scenes on the graphics card via OpenCL?
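The closest thing available today to what R3MF describes is wrapping existing system memory in an OpenCL buffer, sketched below. CL_MEM_USE_HOST_PTR is a standard OpenCL flag, but whether the GPU actually reads the data in place (zero-copy) or the driver quietly shadow-copies it is implementation-defined; the helper name here is hypothetical and a valid cl_context is assumed to exist already.

```c
#include <CL/cl.h>

/* Hypothetical helper: expose scene data that already lives in system RAM
 * to an OpenCL device without allocating a second, device-side copy.
 * scene_data must remain valid for as long as the buffer is in use. */
cl_mem wrap_system_memory(cl_context ctx, void *scene_data,
                          size_t bytes, cl_int *err)
{
    /* CL_MEM_USE_HOST_PTR asks the runtime to use the application's own
     * allocation as the buffer's backing store. Some drivers map it
     * zero-copy; others still copy behind the scenes. */
    return clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR,
                          bytes, scene_data, err);
}
```

A true unified address space, as on the 2013 portion of the roadmap, would make the in-place behaviour the norm rather than a driver-dependent optimization.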
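On MrSpadge's "unified logically, not physically" point, a rough analogy already exists on multi-socket servers: one virtual address space backed by several physical memory pools at different distances. The sketch below uses Linux's libnuma (link with -lnuma); the node numbers are illustrative and nothing here is HSA-specific.

```c
/* One address space, multiple physical pools: allocations are ordinary
 * pointers, but each is pinned to a particular NUMA node. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return 1;
    }

    size_t size = 1 << 20;   /* 1 MiB per buffer */

    /* Two buffers in the same virtual address space, pinned to different
     * physical nodes (think: CPU-near vs. GPU-near memory). */
    char *cpu_near = numa_alloc_onnode(size, 0);
    char *far_pool = numa_alloc_onnode(size, numa_max_node());

    /* Both are touched with plain pointers ("unified logically"), while
     * latency and bandwidth differ per node ("not unified physically"). */
    memset(cpu_near, 0xAA, size);
    memset(far_pool, 0x55, size);

    numa_free(cpu_near, size);
    numa_free(far_pool, size);
    return 0;
}
```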
