I discovered that the Zen 2 has a surprising new feature that we have not seen before: it can mirror the value of a memory operand inside the CPU so that the value can be accessed with zero latency.
This assembly code shows an example:
mov dword [rsi], eax   ; store eax to [rsi]
add dword [rsi], 5     ; read-modify-write on the same address
mov ebx, dword [rsi]   ; read back; the value can arrive with zero latency
It can even track an address on the stack while compensating for changes in the stack pointer across push, pop, call, and return instructions. This is useful in 32-bit mode, where function parameters are pushed on the stack: a simple function can read its parameters without waiting for the values to be stored on the stack and read back again. This does not work if the stack pointer is modified by any other instruction or copied to a frame pointer, so it does not work with functions that set up a stack frame.
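A minimal sketch of the pattern this speeds up, in 32-bit assembly (the function name and register choices are just for illustration):

push eax              ; caller passes an argument on the stack
call func             ; call pushes the return address
...
func:
mov ecx, [esp+4]      ; read the argument past the return address;
                      ; the pushed value of eax can be forwarded
                      ; with zero latency
ret 4                 ; return and remove the argument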
The mechanism works only under certain conditions. The instructions must use general purpose registers, and the operand size must be 32 or 64 bits. The memory operand must be addressed with a base pointer register and, optionally, an index register. It does not work with absolute or rip-relative addresses.
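To illustrate the addressing rules (NASM syntax; the address and the symbol are made up):

mov dword [rsi], eax          ; base register only: eligible
mov dword [rsi+rcx*4], eax    ; base plus scaled index: eligible
mov dword [0x403000], eax     ; absolute address: not eligible
mov dword [rel myvar], eax    ; rip-relative address: not eligible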
It seems that the CPU makes assumptions about whether memory operands have the same address before the addresses have been calculated. This can cause problems in case of pointer aliasing: if the second instruction in the above example uses a different pointer register holding the same value, the CPU assumes that the addresses are different, and the value of eax is forwarded directly to ebx without the 5 added. It takes 40 clock cycles to undo the mistake and redo the calculation correctly.
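A sketch of an aliasing case that would trip this misprediction:

mov rdi, rsi           ; rdi now holds the same address as rsi
mov dword [rsi], eax
add dword [rdi], 5     ; same address through a different register
mov ebx, dword [rsi]   ; the CPU forwards eax without the 5 added,
                       ; then spends about 40 clock cycles repairing it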
Yet, this is a pretty amazing feature. Imagine how complicated it is to implement this in hardware without adding any latency. I wonder why this feature is not mentioned in any AMD documents or promotion material; at least, I can't find any mention of it anywhere. AMD has something they call superforwarding, but that must be something else because it applies only to floating point registers.
Other interesting results for the Zen 2:
The vector execution units and data paths are all extended from 128 bits to 256 bits. A typical 256-bit AVX instruction is executed as a single micro-op, while the Zen 1 would split it into two 128-bit micro-ops. The throughput for 256-bit vector instructions is now as high as two floating point vector additions and two multiplications per clock cycle.
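As a rough sketch, a group like this could issue in a single clock cycle at that throughput:

vaddps ymm0, ymm0, ymm4    ; 256-bit floating point addition
vaddps ymm1, ymm1, ymm5    ; second addition in the same clock
vmulps ymm2, ymm2, ymm6    ; 256-bit floating point multiplication
vmulps ymm3, ymm3, ymm7    ; second multiplication in the same clock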
There is also an extra memory address generation unit (AGU) so that it can do two 256-bit memory reads and one 256-bit write per clock cycle.
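So a memory access pattern like the following could, in principle, execute in a single clock cycle (the pointer registers are illustrative):

vmovaps ymm0, [rsi]        ; first 256-bit read
vmovaps ymm1, [rsi+32]     ; second 256-bit read in the same clock
vmovaps [rdi], ymm2        ; one 256-bit write in the same clock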
The maximum overall throughput for a mix of integer and vector instructions is five instructions or six micro-ops per clock for loops that fit into the micro-op cache. Long loops that don't fit into the micro-op cache are limited by a fetch rate of up to 16 bytes, or four instructions, per clock. Intel processors have a similar limitation, and this is a very likely bottleneck for CPU-intensive code.
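To see why 16 bytes rarely covers four instructions, note that typical VEX-encoded AVX instructions are 4 to 5 bytes long (the lengths below hold for these particular register operands):

vaddps ymm0, ymm0, ymm1        ; 4 bytes with a 2-byte VEX prefix
vfmadd231ps ymm2, ymm3, ymm4   ; 5 bytes with a 3-byte VEX prefix

A long loop of such instructions that misses the micro-op cache can therefore fetch only three or four of them per clock.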
All caches are big, the clock frequency is high, and you can get up to 64 cores. All in all, this is quite a competitive CPU as long as your software does not utilize the AVX512 instruction set. The software market is generally slow to adopt new instruction sets, so I guess it makes economic sense for AMD to lag behind Intel in the race for new instruction sets and longer vector registers.