Understanding Connectivity: Some on the APU, External Chipset Optional

Users keeping tabs on CPU development will have seen the shift over the last decade toward moving the traditional ‘northbridge’ onto the main CPU die. The northbridge was typically the connectivity hub, allowing the CPU to communicate with PCIe devices, DRAM, and the chipset (or southbridge). Moving it onto the CPU silicon gave better latency, better power characteristics, and reduced motherboard complexity, all for a little extra die area. Typically when we say ‘CPU’ in the context of a modern PC build, this is the image we have: a chip containing cores and possibly graphics (which AMD calls an APU).

Typically the CPU/APU has limited connectivity: video outputs (if an integrated GPU is present), a PCIe root complex for the main PCIe lanes, and a pathway to the chipset for additional input/output functionality. The chipset uses a one-to-many philosophy, whereby the total bandwidth between the CPU and chipset may be lower than the combined bandwidth of all the functionality hanging off the chipset; FIFO buffers manage the contention as required. The best analogy for this is that a motorway is not 50 million lanes wide, because not all cars use it at the same time. You only need a few lanes to cater for all but the busiest circumstances.
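To put a rough number on that one-to-many idea, here is a minimal sketch of an oversubscription calculation. All figures are illustrative PCIe 3.0 / SATA / USB line rates chosen for the example, not AMD specifications, and the four-lane uplink is simply an assumed width:

```python
# Illustrative oversubscription math for a CPU-to-chipset uplink.
# All widths and rates are example figures, not AMD specifications.

GB_PER_PCIE3_LANE = 8e9 * (128 / 130) / 8 / 1e9  # 8 GT/s, 128b/130b -> ~0.985 GB/s

uplink_lanes = 4  # assumed CPU-to-chipset link width for this sketch
downstream_gb = {
    "sata_ports":   2 * 0.6,                # 2 x 6 Gbps SATA ~ 0.6 GB/s each
    "usb3_ports":   4 * 0.5,                # 4 x 5 Gbps USB 3.0 ~ 0.5 GB/s each
    "extra_pcie_x1": 2 * GB_PER_PCIE3_LANE, # 2 x PCIe 3.0 x1 downstream ports
}

uplink = uplink_lanes * GB_PER_PCIE3_LANE
total_downstream = sum(downstream_gb.values())
print(f"uplink:          {uplink:.2f} GB/s")
print(f"downstream peak: {total_downstream:.2f} GB/s")
print(f"oversubscription: {total_downstream / uplink:.2f}x")
```

The downstream peak exceeds the uplink, which is exactly the motorway argument: the link is sized for realistic concurrent traffic, not for every device bursting at once.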

If the CPU also has the chipset/southbridge built in, either in the silicon or as a multi-chip package, we typically call this an ‘SoC’, or system on chip, as the one unit has all the connectivity needed to fully enable its use. Add on some slots, some power delivery and firmware, then away you go.

Bristol Ridge’s ‘SoC’ Configuration

What AMD is doing with Bristol Ridge is a half-way house between an SoC and a fully external chipset. Some of the connectivity, such as SATA ports, PCIe storage, or PCIe lanes beyond the standard GPU lanes, is built into the processor. These fall under the features of the processor, and for the current launch they are a fixed set. The APU also has additional connectivity to a chipset that can provide more features, but the use of that chipset is entirely optional.

Here’s a block diagram to help explain:

On the APU we have two channels of DDR4, supporting two DIMMs per channel. For the major PCIe devices, we have a PCIe 3.0 x8 port, and this does not support bifurcation (or splitting) into any x4, x2 or x1 combination. It’s a solitary x8 link suitable for a PCIe x8 port (we’ll discuss what else can be done with this later). The APU communicates with the optional chipset over a PCIe 3.0 x4 link, and we’ve confirmed with AMD that this is a simple PCIe interface. The rest of the APU provides four USB 3.0 ports, two SATA 6 Gbps ports, and two PCIe 3.0 x1 ports. The latter also support NVMe, and can be used as two PCIe 3.0 x1 storage ports or combined into a single PCIe 3.0 x2.

It Looks Like an x16

Now, if you look at the layout, try counting up how many PCIe lanes are split across all the features. We’ve seen a USB 3.0 hub support four ports of USB 3.0 from a single lane of PCIe 3.0 before, and there are plenty of controllers out there that split a PCIe 3.0 x1 into two SATA ports. So play the adding game: x8 + x4 + x1 + x1 + x1 + x1 = x16. The Bristol Ridge APU seems to suggest it actually has sixteen PCIe 3.0 lanes, but AMD has decided to forcibly split some of them using internal hubs and controllers.
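The adding game above can be tallied explicitly. The hub and controller widths below follow the article’s back-of-the-envelope reasoning (one lane feeding a four-port USB 3.0 hub, one lane feeding a two-port SATA controller); they are inferences, not a confirmed AMD floorplan:

```python
# Tally the lane-equivalents behind each Bristol Ridge APU feature.
# Widths are the article's inferences, not confirmed silicon details.
feature_lanes = {
    "gpu_link_x8":        8,  # the non-bifurcating PCIe 3.0 x8 port
    "chipset_link_x4":    4,  # uplink to the optional chipset
    "usb3_hub_x1":        1,  # one lane can feed a 4-port USB 3.0 hub
    "sata_controller_x1": 1,  # one lane can feed a 2-port SATA controller
    "storage_x1_a":       1,  # PCIe 3.0 x1 storage port
    "storage_x1_b":       1,  # PCIe 3.0 x1 storage port
}
total = sum(feature_lanes.values())
print(f"implied root-complex width: x{total}")  # -> x16
```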

It’s an interesting tactic because it means that systems can be built without a discrete chipset, or the four chipset lanes can be used for other features. However, it rules out a full PCIe 3.0 x16 link for a full-bandwidth PCIe co-processor. Bear in mind that if there were a PCIe 3.0 x16 link, there would be no additional lanes for a chipset, and thus no I/O such as SATA ports and no physical storage at all.

The x16 total theory is also somewhat backed up by the lack of bifurcation on the x8 link. Historically, a PCIe root complex on a consumer platform that supports x16 can be bifurcated down to x8/x4/x4, and anything beyond that requires additional PCIe switches to support more than three devices. It would seem that AMD has taken the final x4 link and added an on-die PCIe switch to provide those ports, feeding standard PCIe-to-USB/SATA controllers. I would hazard a guess that what AMD has actually done is more integrated and complicated than this, in order to keep die area low.

PCIe is Fun with Switches: PLX, Thunderbolt, 10GigE, the Kitchen Sink

Another thing about the x8 link is that it can be combined with an external PCIe switch. In my discussions with AMD, they suggested a switch that bifurcates the x8 into dual x4 interfaces, which could leverage fast PCIe storage while keeping the onboard graphics for any GPU duties. At the other extreme, an x8-to-x32 PCIe switch could afford two large x16 links, although large GPU CrossFire is not one of the main aims for the platform.

Here’s a crazy mockup I thought of, using a $100 PCIe switch. I doubt this would come to market.


Ian plays a crazy game of PCIe Lego

The joy of PCIe and switches is that it becomes a mix and match game - there’s also the PCIe 3.0 x4 to the chipset. This can be used for non-chipset duties, such as anything that takes PCIe 3.0 x4 like a fast SSD, or potentially Thunderbolt 3. We discussed TB3 support via Intel’s Alpine Ridge controller, and were told that the AM4 platform is currently being validated for systems supporting AMD XConnect, which will require Thunderbolt support. AMD did state that they are not willing to speculate on TB3 use; from my perspective, this is because the external GPU feature is what AMD is counting on as the primary draw for TB3-enabled systems (particularly for OEMs). I suspect the traditional motherboard manufacturers will offer wilder designs, and ASRock likes to throw some spaghetti at the wall to see what sticks.
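As a quick sanity check on that chipset-link idea, here is a minimal sketch comparing what a PCIe 3.0 x4 uplink can actually carry against Thunderbolt 3’s headline link rate. The point is that TB3’s 40 Gbps covers all protocol traffic, while its PCIe tunnel tops out around what a 3.0 x4 link supplies anyway, so feeding an Alpine Ridge controller from this link is not obviously a bottleneck:

```python
# Compare usable PCIe 3.0 x4 bandwidth against Thunderbolt 3's link rate.
# Standard published line rates; the comparison itself is the author's sketch.
lane_gbps = 8 * (128 / 130)   # PCIe 3.0: 8 GT/s with 128b/130b -> ~7.88 Gbps/lane
x4_gbps = 4 * lane_gbps       # ~31.5 Gbps of usable PCIe payload bandwidth
tb3_link_gbps = 40            # Thunderbolt 3 headline bidirectional link rate
print(f"PCIe 3.0 x4 payload: {x4_gbps:.1f} Gbps vs TB3 link: {tb3_link_gbps} Gbps")
```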


122 Comments


  • Danvelopment - Tuesday, September 27, 2016 - link

    Your processor alone is almost $200. You can buy a motherboard, chassis, 80+ psu (what Dell uses in their optiplex's), (well exclude the aftermarket cooler, extra fans and optical for fun sake), and 4gb ram for $83 including ship(200-157+40)? I'm impressed. Care to spec that up?

    Plus the price of the parts you were going to add to the $200 machine.

    And your choice in bench appears to be severely lacking in benchmarks but I see there aren't many ivy i5s, can get ex lease ivy i7s for about $30 more.
  • jardows2 - Tuesday, September 27, 2016 - link

    I had hoped for more numbers in Bench, but I guess the i3's don't get the same attention here. I didn't really want to link to "rival" review sites here in the comments. Main point was that the Skylake i3's are not that dramatically slower than Ivy i5's.

    i3-6100 is pricing around $120 USD, $110 on sale. Asus B150M-A/M.2 is about $80, but I live close to a MicroCenter, so their combo deal knocks $30 off that price. Crucial MX300 M.2 for $70, 8GB of DDR4 for $35, 1TB hard drive for raw storage at $45, Case/PS for $65. Use my own license for OS. That comes up to ~$415-$445 for a brand new computer.

    Main point, I can get a new computer for not much more than a used computer, once I bring the used computer up to my specification. Having a new computer over a used computer for me is more important than the performance difference of the i5.
  • Danvelopment - Tuesday, September 27, 2016 - link

    It's not really building a new machine if you're reusing old parts, if you're talking the general populace rather than you personally (my target) they won't have the option of moving their Windows license, and there's a clock drop on the i3-6100 relative to the benchmarks earlier.

    Also a chassis/PSU for $65 doesn't sound like a very good option. Going back to Dell (my old company was Dell heavy so I have a lot of experience with their enterprise lines, HP, Lenovo etc will probably be the same). The Optiplex chassis' were almost entirely toolless, well cooled and the 790 onwards looked decent, albeit not incredible (but a $65 chassis/PSU wouldn't). On top of that they used 80+ PSUs (the 3000/7000/9000 series used Gold, I can't remember if the older ones were the same), proven, quality units. i5/i7 builds also used Q series motherboards with Intel components (such as the NIC).

    If you're matching quality like for like then you'd be looking to spend more on the new machine. I'd much rather personally run a secondhand Ivy i5/i7 using quality components. Their consumer lines are garbage from my experience but ex-lease machines are all enterprise devices.

    Being able to do something doesn't make it a better option, especially if you drop the quality to do so. Ivy i5, even to the benchmarks above is more powerful, ex-lease component machines are higher quality and even with the above it's still cheaper. It just makes sense.
  • Danvelopment - Tuesday, September 27, 2016 - link

    I don't work there anymore but I liked the Dell enterprise machines so much that I actually bought an (ex-lease) E7240 after I left. i5-4200U, 4GB RAM (I added another 8 that I had lying around), 256GB OEM SSD for $200. I can flip the back off with two screws and access almost everything. And the screen front bezel just pulls off with fingernails, although you wouldn't know it til you tried. You don't have to unbolt the hinges like most laptops.

    Before I started there they bought Vostros (laptop and desktop) for some reason, rather than the enterprise machines and fark those things. They were the hardest farking things to work on, they literally went out of their way to make it hard. I phased the final ones out just before I left. It was the Vostro 3450 that was my most reviled computer ever. The hard drive was screwed onto the motherboard and you literally had to pull the whole thing apart, lift the motherboard then unscrew the HDD from it. If you took the back panel off, you could have done it from there but they put a small band of plastic on the bottom chassis to prevent it. It literally had no other purpose. If there was no warranty you could take a knife, cut that plastic off and do it directly.

    Look at this joke of a thing:
    http://www.laptopultra.com/guide/wp-content/upload...
    https://i.ytimg.com/vi/6QwZ71iAdLA/maxresdefault.j...
  • 4fifties - Friday, September 23, 2016 - link

    If DIY motherboards, which presumably would allow either Bristol Ridge or Summit Ridge, follow the pattern of this OEM board, aren't we consigning Zen to just eight lanes of PCIe 3.0 for discrete graphics? Not necessarily an extinction-level event, but neither is it something gaming enthusiasts will be happy with. Hopefully, motherboard manufacturers won't drop the ball with this.
  • prtskg - Friday, September 23, 2016 - link

    I think both Summit ridge and Raven ridge will have better chipset(enthusiast level).
  • KAlmquist - Saturday, September 24, 2016 - link

    I take it you are thinking that the AM4 socket has more than 12 PCIe lanes, but that Bristol Ridge doesn't connect them all (sort of like the Intel i7-6800K has 28 PCIe lanes even though it uses a socket that has 40 lanes). That makes sense.

    My guess is that motherboard manufacturers expect AM4 motherboard sales to be driven primarily by Zen. In the DIY market, even the people who do buy a Bristol Ridge processor may be doing it with the intention of upgrading to a more powerful processor later. So I would expect most motherboard manufacturers would try to do a good job of supporting the Zen-based processors.
  • MrCommunistGen - Friday, September 23, 2016 - link

    It really looks like the connectivity onboard the APU is targeted at what a normal laptop would need. This should be a major design advantage for AMD compared to their previous mobile platforms in terms of power, design & material cost, and platform footprint.

    - This class of CPU doesn't warrant a x16 PEG Link
    - Due to space constraints most non-DTR laptops will have fewer than 4x USB ports - maybe 3+1 USB-based card reader. They can probably use an onboard hub for more if they really need them.
    - x4 PCI-E 3.0 M.2 is an option

    In fact, other than USB ports, this is probably enough connectivity for most non-enthusiast desktop users as well. This could help BOM and board design costs here as well. The optimistic part of me would love to see that reinvested elsewhere in the system. Realistically I see that split between a lower sticker price and an increase in margins for the system builder.
  • stardude82 - Friday, September 23, 2016 - link

    Almost certainly AMD is just reusing the Carrizo design as a cost cutting measure. There isn't an AMD CPU on the market which an x8 link would bottleneck first.
  • Samus - Friday, September 23, 2016 - link

    Nice to see AMD trumping Intel's Crystalwell GPU for half the cost...
