shabby - Thursday, October 24, 2019 - link
Do they ever announce the pricing of these kinds of chips?
drexnx - Thursday, October 24, 2019 - link
probably depends on quantity, configuration, how much other stuff you buy from Samsung LSI, how long of a delivery you'll accept, etc.
Raqia - Thursday, October 24, 2019 - link
It's pretty remarkable that desktops and even laptops still sport bulky DIMMs and SSDs when packages like these get the job done with better latency and lower power in a tiny fraction of the area. If Windows on ARM gains traction, or Apple shifts to ARM as rumored with the MacBook Air in 2020, it should usher in significant form factor changes for many popular markets.
IntelUser2000 - Thursday, October 24, 2019 - link
Ok, when you want to make something people can upgrade, you have to compromise. The connectors have to be robust enough to withstand many cycles, and that changes things quite drastically.

If you integrate things you can make them lower power and sometimes even lower latency. But mobile platforms are much higher latency anyway: Geekbench 4 memory scores show 40-60 ns for desktop Intel platforms, while the Apple A12 gets about 110 ns.
If you want to cater to the upgrade market then such "bulky" DIMMs and SSDs will continue to exist.
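As a rough back-of-the-envelope of what those latency figures cost a core, assuming illustrative clocks of 3 GHz for the desktop part and 2.5 GHz for the mobile part (only the 40-60 ns and 110 ns numbers above are measured):

    # Cycles a core spends waiting on one full trip to DRAM: latency_ns * cycles_per_ns
    def stall_cycles(latency_ns, clock_ghz):
        return latency_ns * clock_ghz

    print(stall_cycles(50, 3.0))    # desktop Intel at ~50 ns and 3 GHz  -> ~150 cycles
    print(stall_cycles(110, 2.5))   # Apple A12 at ~110 ns and 2.5 GHz   -> ~275 cycles

Caches and prefetchers hide most of those trips, but the per-miss penalty on the mobile platform is nearly twice as large.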
Kevin G - Thursday, October 24, 2019 - link
It is interesting actually looking through some of the manuals to realize just how few remove/add cycles parts are rated for. CPU sockets are generally rated for roughly 15. That serves most customers, as there are few upgrades and at most one ever performed, but it is still a relatively low number of cycles to support.

Platform latency is generally dependent upon the platform; see the difference between AMD and Intel desktop chips, for example. Apple in mobile is doing very aggressive power management, and tricks like putting the memory bus to sleep and relying on the DRAM's self-refresh add latency vs. a desktop whose memory bus is expected to be always on.
This kind of consolidation makes sense in mobile, but for desktops and servers upgradability is still important enough that the extra bulk is worth it.
Raqia - Thursday, October 24, 2019 - link
Highly integrated SoCs simply aren't currently favored for desktop performance envelopes, but they very well could be if tuned for such purposes, with better performance and lower cost. There will be real performance improvements for accelerators like GPUs once SoC memory bandwidth improves and copying data over a bus to a separate pool of memory becomes moot, for instance. Replacing the entire SoC along with RAM and storage should be cheaper than piecemeal modular upgrades and give bigger aggregate performance benefits.
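To put a number on the copy cost in question, assuming a PCIe 3.0 x16 link with roughly 15.75 GB/s of usable bandwidth (an assumption for illustration, not a figure from the article):

    # Time to copy a buffer from system RAM into a discrete GPU's separate memory pool
    def copy_ms(buffer_gb, link_gb_per_s=15.75):
        return buffer_gb / link_gb_per_s * 1000.0

    print(copy_ms(1.0))   # ~63 ms for a 1 GB asset
    print(copy_ms(0.25))  # ~16 ms for 256 MB, roughly a whole 60 fps frame budget

With a single shared pool on an SoC that transfer simply never happens; the GPU reads the same pages the CPU wrote.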
IntelUser2000 - Thursday, October 24, 2019 - link
It's not just that. If you want upgradeability, you need standards, and standards take time to implement, whether it's your own or one that has to work across an ecosystem. LGA1151 is a standard Intel decided to make for their chips, and there are certain size requirements it has to meet.

It's true that the absolute number of remove/add cycles may be low, but a socket still needs a degree of robustness that an integrated part doesn't. Sockets also take up extra space. Ultrabooks don't have them because if you want the entire laptop, folded, to be 15 mm thin, then BGA is the only way to go.
Integrated solutions don't need to care about this. They can change depending on the need.
Raqia - Thursday, October 24, 2019 - link
There's a great quote from Orwell's Animal Farm, spoken by Benjamin the donkey, that I think applies to protocols enabling modular upgrades: "God gave me a tail to keep off the flies. But I'd rather have had no tail and no flies." While general bus protocols will continue to have a place in the future of computing hardware, both the excess overhead of needlessly general protocol traffic and the modular design paradigm will impede performance relative to accelerators that can be integrated on an SoC and share a pool of very fast memory with the CPU. I'd rather the likes of Apple or Qualcomm give me a whole new SoC on a cutting-edge node every few months, with upgrades to the entire assemblage, than piece it together myself, sacrificing both speed and power efficiency at higher cost.
name99 - Thursday, October 24, 2019 - link
Amen brother
Raqia - Thursday, October 24, 2019 - link
As cost goes down and performance plateaus, the socket-level upgrades that were common when prices were high and improvements were rapid become moot. My past infatuation with upgrading individual parts in the old modular model is waning in favor of more integrated upgrades, as SoCs are improving at a much faster clip than higher-power-envelope parts, and I think they will soon displace more traditional desktop layouts at much higher performance envelopes. Many off-SoC accelerators will probably go the way separate FPUs did in the '90s once TSMC's 5 nm process allows ~15B transistors on a commercial SoC and RAM stacking technologies eliminate this performance bottleneck from consumer systems. Memory absolutely doesn't have to be higher latency in theory, as wire distances are simply much shorter when it is wired to the package or connected by TSVs.

I'm happier to pay ~$700 a pop for a new slim PCB with an SoC on a leading-edge process and everything you find in a typical PC built in, rather than ~$1,000 for a CPU + mobo + GPU with upgradeable DIMM and PCIe slots, approximately $500 of added cost per upgrade of CPU, RAM, or GPU, and much unwanted bulk, heat, and bus interface distance. (You typically wouldn't find a cellular modem there either.)
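A sketch of the wire-distance point, assuming signals travel at roughly half the speed of light in a PCB, about 6.7 ps/mm (my numbers, not the article's):

    # One-way signal flight time over a connection of the given length
    def flight_ns(length_mm, ps_per_mm=6.7):
        return length_mm * ps_per_mm / 1000.0

    print(flight_ns(100))  # ~0.7 ns for a ~10 cm motherboard trace out to a DIMM
    print(flight_ns(1))    # ~0.007 ns for a ~1 mm on-package or TSV connection

The flight time itself is small next to the ~50-110 ns end-to-end figures quoted earlier, so most of the gap lives in controllers, power management, and the DRAM array; the bigger wins from short wires are in signaling power and achievable data rates.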
IntelUser2000 - Thursday, October 24, 2019 - link
Actually we're going in the opposite direction, with CPUs and GPUs integrating specialized accelerators. Take the RT and Tensor cores in Turing, for example. Or what about the divergence of consumer and datacenter GPUs?

That's because while density gains are good, performance is nearly entirely limited by power, and the power reduction from a new process is a fraction of what it once was.
HPC guys are predicting we'll soon see accelerators (CPUs included) with 1 kW of power!
Also, achieving super-high memory bandwidth requires solutions that aren't practical in the consumer space: HBM continues to be prohibitively expensive and difficult to implement, not just because of the packaging but because of the die stacking needed to reach such bandwidth.
That doesn't mean it will die. It'll surely exist, but in premium systems. Watch as both laptops/PCs and phones reach new premium price points.
Raqia - Thursday, October 24, 2019 - link
I think what you say about heterogeneity of architecture will continue to be true for the very high end, which is often a large-scale experimental hodge-podge without strict power envelopes, located in specialized facilities with large budgets. I think the paradigm of integration at the lower end will begin to creep up into the higher end in the coming years though.

As for power, if anything much more energy is now spent moving data around than doing actual computation. A natural way to reduce this cost is tighter integration: shorter distances and fewer off-die components. Methods using TSVs may seem impractical today, but there are far more commercial examples of them, in AMD GPUs and Broadcom Jericho switches, than just a few years ago; related TSV techniques are also on the roadmaps of both Intel and TSMC with their Foveros and WoW manufacturing processes.
Even die-bonded DRAM is superior to DIMMs for many practical form factors in wide use today, and it seemed like an impractical and expensive technology just a few years back when packaging techniques were less sophisticated. For a growing subset of ordinary users, solutions like this will be more than good enough, as well as cheaper and more power efficient.
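Ballpark figures for the compute-vs-movement point, using commonly cited order-of-magnitude numbers rather than anything from the article:

    # Rough energy per operation, in picojoules (order of magnitude only; my assumption)
    FP64_OP_PJ     = 20      # ~pJ for a 64-bit floating-point operation
    OFFCHIP_64B_PJ = 1000    # ~pJ to fetch 64 bits from off-package DRAM

    print(OFFCHIP_64B_PJ / FP64_OP_PJ)  # fetching an operand costs ~50x the math done on it

Shorter, denser connections attack exactly the larger of those two numbers.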
name99 - Thursday, October 24, 2019 - link
What you are fighting here is conservatism, and physics+reality are no match for conservatism. The attitude is: "If I was able to change DIMMs in my system 5 years ago, I should be able to do so forever!!!"
Notice that no one complains about not being able to make this change in contexts where they were never able to do so --- e.g. mobile phones, or HBM, or RAM on a GPU card. That's the clear tell that these complaints are based on conservatism, on anger that the world is changing, not on any sort of rational analysis.
Bulat Ziganshin - Thursday, October 24, 2019 - link
It's not only about upgrades, but also about flexibility of configuration. Imagine you want a 9400F CPU with 8 GB of RAM and a 256 GB 970 Evo disk. Today that can be built right at the local store, while for a smartphone each store would have to stock every exact config, resulting in much less actual choice.
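The stocking problem grows multiplicatively; a toy illustration with made-up option counts:

    # Prebuilt SKUs a store must stock vs. parts it must stock for build-to-order
    cpus, ram_options, ssd_options = 5, 4, 4
    print(cpus * ram_options * ssd_options)  # 80 distinct prebuilt configurations
    print(cpus + ram_options + ssd_options)  # 13 individual parts, assembled on demand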
Raqia - Thursday, October 24, 2019 - link
Perhaps, but if SoC cost is sufficiently low and specs sufficiently high, there might not need to be so many permutations: customers could be satisfied with just a few tiers instead of paying for that granularity with more complex mobos whose extra DIMM and bus slots sit idle for most consumers and push resources further away from each other. The smartphone industry has demonstrated that this lack of granularity has been perfectly fine for many years, and configurability for its own sake is becoming a niche for hobbyists.

I would feel bad for devoted hobbyists, having been one myself, but at the same time we're getting lower costs, better form factors, efficiency, and performance. I think the relentless pace of fabs fueled by mobile SoC revenues, the performance benefits of SoC integration, and open licensing of IP under a fabless model are pushing costs much lower and specs much higher, to the point where a few tiers of inexpensive SoCs will satisfy the whole market by 2030, even for high-end consumer desktop-class needs.
psychobriggsy - Thursday, October 24, 2019 - link
Yeah, I can see these being worth the compromise of non-expandability in a thin laptop form factor. Especially two or four of them, even in higher-priced professional form factors.

I somehow see the ARM-based 8cx laptops making use of these first.
drexnx - Thursday, October 24, 2019 - link
drexnx - Thursday, October 24, 2019 - link
I think you have the device configurations flipped for 12 GB vs. 10 GB
Kishoreshack - Thursday, October 24, 2019 - link
Where is the 8 GB CONFIGURATION?
Kishoreshack - Thursday, October 24, 2019 - link
Samsung launches these modules & unfortunately we never come to know which phones are using them.
PeachNCream - Thursday, October 24, 2019 - link
Are there use cases coming soon for mobile platforms to take advantage of 10-12 GB of RAM? My current phone has 2 GB and it isn't starved for memory despite having taken over a lot of tasks that I used to perform on a laptop or desktop, including most of my gaming. My Windows laptop only has 8 GB and it doesn't seem memory starved either, though I will readily admit it does very little these days aside from two to four browser tabs and a Word doc, maybe an older game (Bay Trail with passive cooling, so it's CPU, GPU, and thermally limited). The only situation in my personal usage model that can really smother a system is when I'm testing and tinkering with a fair number of virtualized systems. In that instance, I can easily fill 16 GB in my Linux test laptop and wish I had 32 GB for some breathing room for the host OS. That's not something I can yet do on my phone, although I would happily entertain the idea if multitasking improves and I can more easily use a keyboard, mouse, and external monitor with a mobile device via a fully wireless docking solution.
Infy2 - Thursday, October 24, 2019 - link
Adding such amounts of RAM to phones seems pointless. I have a new phone with 6 GB of RAM and it keeps reloading previously opened apps as often as my old phone with 2 GB did.
Raqia - Thursday, October 24, 2019 - link
The goal of overkill configurations like these might ultimately be DeX-like experiences with full desktop OSes from your mobile device. Microsoft has ported Windows to ARM and is doing a phone of its own (running Android, it seems, but it's entirely possible it can run an instance of Windows in a VM too...), and Apple is breaking legacy compatibility with macOS Catalina to achieve this unified ecosystem in the next few years.
ads295 - Friday, October 25, 2019 - link
I think it's there so that OS developers can push more crap onto the device. Same goes for UFS storage, and higher performance CPUs. After a point they want your device to be able to afford the wasted processing.
name99 - Thursday, October 24, 2019 - link
How does this make sense on performance/energy grounds?

The whole trick with DRAM is to get it as close to the SoC as possible, with carefully tailored connections. This reduces your RC, meaning higher speed and lower power. Hence all those tricks like PoP packaging (the DRAM chip directly on top of the SoC, with tiny wires connecting the two).
This looks like something that's worse than soldered connections, maybe even worse than socketed DRAM: substantially slower and higher power. Who wants that in their phone?
Am I missing something?
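A toy version of that RC argument, treating the connection as a lumped RC whose resistance and capacitance both grow with length (arbitrary per-mm units, purely illustrative):

    # Delay scales ~ R*C ~ length^2; switching power scales ~ a*C*V^2*f ~ length
    def rc_delay(length_mm, r_per_mm=1.0, c_per_mm=1.0):
        return (r_per_mm * length_mm) * (c_per_mm * length_mm)

    def switching_power(length_mm, c_per_mm=1.0, v=1.1, f_ghz=2.0, activity=0.2):
        return activity * (c_per_mm * length_mm) * v**2 * f_ghz

    print(rc_delay(50) / rc_delay(2))                # ~625x, long trace vs. PoP-scale wire
    print(switching_power(50) / switching_power(2))  # ~25x more switching power

Real board traces behave more like transmission lines than lumped RCs, but the direction is the same: shorter connections buy speed and power.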
scholztec - Friday, October 25, 2019 - link
+1 for the Animal Farm quote