9 Comments

  • s.yu - Wednesday, December 18, 2019 - link

    Interesting new customer.
  • Drumsticks - Wednesday, December 18, 2019 - link

    I wish there was more information on this. 260 TOPS at what precision? Presumably INT8, since it's referenced as an inference chip, but INT4 isn't out of the question. 260 TOPS at INT8 is still pretty impressive, but it's not mind-bending if it's just INT4.
  • nandnandnand - Wednesday, December 18, 2019 - link

    This is 260 TOPS at 150 Watts on a 14nm node.

    Compare that to the Nvidia DRIVE AGX Orin: 200 TOPS (INT8), likely at 65-70 Watts, on a 7nm or 5nm node.

    I'm going to assume INT8 but who knows.
  • Alistair - Wednesday, December 18, 2019 - link

    If it is INT8, isn't it faster than the chip Nvidia announced that won't be out for another three years?
  • nandnandnand - Wednesday, December 18, 2019 - link

    Sure, but it draws more power.

    1.73 TOPS/Watt (260/150) vs. at least 2.86 TOPS/Watt (200/70, using AnandTech's guess for the Orin TDP); the arithmetic is sketched out below the thread.
  • Santoval - Wednesday, December 18, 2019 - link

    It also appears to be strictly an AI processor, with no general-purpose cores, no GPU, and nothing else but tensor cores (and, of course, the memory controllers to talk to the HBM2 dies). That means an additional general-purpose CPU or SoC is required to control and program it. An extra CPU means higher cost and an even higher power draw.
  • Arnulf - Saturday, December 21, 2019 - link

    Sure, but it has HBM2 baked in and is actually being produced as we speak, on an older (less efficient) process.
  • Santoval - Wednesday, December 18, 2019 - link

    Almost certainly INT8.
  • Fataliity - Thursday, December 19, 2019 - link

    It is noteworthy that when the SoC was introduced back in mid-2018, its TDP was described as falling in at 100 Watts.

    -- For some reason, most of these companies don't include the power of the HBM and other components when calculating TDP, only the compute chip itself.
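
A quick back-of-the-envelope check of the efficiency figures debated above, as a minimal Python sketch. It assumes the numbers quoted in the thread: 260 TOPS at 150 W for the chip in the article, and 200 TOPS (INT8) at an assumed 65-70 W for the Nvidia DRIVE AGX Orin (the Orin wattage is AnandTech's guess, not an official figure).

    # Inference efficiency (TOPS per watt) from the figures quoted in the thread.
    # The Orin wattage range is AnandTech's guess, not an official number.
    def tops_per_watt(tops: float, watts: float) -> float:
        return tops / watts

    print(f"Article chip (260 TOPS @ 150 W): {tops_per_watt(260, 150):.2f} TOPS/W")  # 1.73
    print(f"Orin (200 TOPS @ 70 W):          {tops_per_watt(200, 70):.2f} TOPS/W")   # 2.86
    print(f"Orin (200 TOPS @ 65 W):          {tops_per_watt(200, 65):.2f} TOPS/W")   # 3.08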
