Software optimization resources | E-mail subscription to this blog | www.agner.org
Proposal for instruction set - now on Github
Author: Agner | Date: 2016-06-26 02:48
The discussion in the thread Proposal for an ideal extensible instruction set has been very fruitful and led to a lot of new ideas. I don't know where this will lead us, but the project looks so promising that it is worth pursuing further.
I have put the project on Github to facilitate collective development of the instruction set, software toolchain, system standards, and hardware implementations. The address is github.com/ForwardCom. The development of hardware should be done in collaboration with the Opencores forum. The name CRISC was taken, so I have changed the name to ForwardCom. It stands for "forward compatible computer system". The latest version of the manual is at github.com/ForwardCom/manual or www.agner.org/optimize/#instructionset. I have also reserved the domain name www.forwardcom.info.

The ForwardCom project includes both a new instruction set architecture and the corresponding ecosystem of software standards, application binary interface (ABI), memory management, development tools, library formats and system functions. Here are some highlights:
Proposal for instruction set - now on Github
Author: Joe Duarte | Date: 2016-07-04 01:49
Hi Agner – Can you recap how you resolved the objection that Hubert had to unified int/floating point registers? He had argued for the benefits of dedicated FP registers, but I can't find the resolution (the thread became quite long).

Other questions:

2. How do you situate your ISA in the context of the current trend toward heterogeneous computing, GPGPU, AMD's HSA, OpenCL, OpenMP, OpenACC, FPGAs, etc.? Will the new ISA help, hinder, or be neutral to heterogeneous programming?

3. This is an interesting time to develop a new ISA because of the oft-claimed end of Moore's Law scaling. How do the anticipated physics of the next 10 – 20 years of microprocessor design impact ForwardCom? Is the physical substrate irrelevant? (I'm thinking of III-V semiconductors, mostly) Or is ForwardCom shaped by present-day silicon and fab assumptions? It seems like some aspects of an ISA must be affected by the underlying physics and engineering constraints. For example, whether unified registers can be efficiently implemented, or whether orthogonal instructions can be efficiently implemented, and definitely how variable-length vector registers will perform.

4. Is ForwardCom extensible to a unified RAM (volatile) and non-volatile memory space? There are a few proposals out now, including Moneta D, that rethink the file system and make calling solid state storage a lot like calling a memory address (or just the same thing). They mostly hinge on faster-than-SSD media, like Intel and Micron's new 3D XPoint, phase-change memory, etc., but you could probably do it with fast SSDs.

I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache (an idea which Intel has implemented a bit), but much, much smaller. The programmer might not even need to know or designate the constants – it could be empirically determined by the compiler. This would only help certain kinds of applications, but that's true of many things – it would be a way to reduce code size but more importantly to have a sticky, stable cache immune from misses and eviction. An expanded version of this would be to be able to define and save specific, short code sequences. (I see your Chapter 6, but it seems like a placeholder for now.)
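The shortcode idea above can be modeled in plain C. This is a hypothetical sketch of the proposal, not any real ISA feature: the names `const_table`, `set_app_constant` and `load_app_constant` are invented for illustration, and the 2-bit shortcode width is just the example size mentioned above.

```c
/* Hypothetical model of an application-installed constant table:
   the program installs a few frequently used constants once at
   start-up, and later references them with a 2-bit shortcode
   instead of a full-width immediate. */
static double const_table[4];               /* indexed by a 2-bit shortcode */

void set_app_constant(unsigned shortcode, double value) {
    const_table[shortcode & 3u] = value;    /* done once at program start   */
}

double load_app_constant(unsigned shortcode) {
    return const_table[shortcode & 3u];     /* no immediate in the "code"   */
}
```

The table plays the role of the "sticky, stable cache immune from misses and eviction": it is tiny, private to the application, and never evicted.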
Proposal for instruction set - now on Github
Author: Agner | Date: 2016-07-04 05:47
Joe Duarte wrote:

> Hi Agner – Can you recap how you resolved the objection that Hubert had to unified int/floating point registers?

Hubert's arguments are here. My reason for using the same registers for floating point scalars and floating point vectors is that floating point code often contains calls to mathematical function libraries. Such a library can use variable-length vectors for parameters and results. The same library function can be called with a scalar or a vector because a scalar is simply handled as a vector with one element. This will make it easy for the compiler to vectorize the code. ForwardCom has a separate register set for integer scalars. These are useful for pointers, loop counters, array indexes, etc. that don't need vectors, and they are easier to save and restore than vector registers.

> How do you situate your ISA in the context of the current trend toward heterogeneous computing, GPGPU, AMD's HSA, OpenCL, OpenMP, OpenACC, FPGAs, etc.? Will the new ISA help, hinder, or be neutral to heterogeneous programming?

People are sometimes doing non-graphic calculations in a GPU. This is complicated and awkward and has many problems with data transfer and synchronization. ForwardCom will eliminate this need. You may have a system with two sets of ForwardCom processors, one for graphics and one for everything else, but in most cases you will just have a system with several identical all-purpose ForwardCom processors.

> This is an interesting time to develop a new ISA because of the oft-claimed end of Moore's Law scaling. How do the anticipated physics of the next 10 – 20 years of microprocessor design impact ForwardCom? Is the physical substrate irrelevant?

The physics affects the clock frequency and the maximum vector length that can be implemented efficiently. Some kind of layered or 3-D chip could possibly help make very long vectors more attractive, because it reduces physical distances on the chip.
> Is ForwardCom extensible to a unified RAM (volatile) and non-volatile memory space? There are a few proposals out now, including Moneta D, that rethink the file system and make calling solid state storage a lot like calling a memory address (or just the same thing). They mostly hinge on faster-than-SSD media.

Yes. The memory management system of ForwardCom divides a program's memory into two blocks: (1) code and constants; (2) read/write data including static data, stack and heap. Block (1) is addressed relative to the instruction pointer, while block (2) is addressed relative to a special register called the data section pointer (DATAP) and the stack pointer. Nothing in one block is addressed relative to something in the other block. Each of the two blocks can be placed at any address independently of the other block. We can easily place block (1) in non-volatile memory if only it has fast read access. It doesn't need fast write access. Block (2) should be placed in volatile RAM. Microcontrollers with Harvard architecture are like this anyway.

> I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache.

ForwardCom can have constants embedded in instructions. This includes integer constants of all sizes and floating point constants of half, single and optionally double precision (requiring 1, 2 and 3 word instruction sizes, respectively). Your idea would require an extra register set for saving and restoring data that will be needed later. This extra register set would just require two extra instructions for copying data from a normal register to an extra register and vice versa. This could be useful for many purposes and it would reduce the pressure on the data cache. The only problem I can see is that it will be an extra complication to the function calling convention, because we need to know whether a function modifies the extra registers or not.

> An expanded version of this would be to be able to define and save specific, short code sequences. (I see your Chapter 6, but it seems like a placeholder for now.)

That would be more complicated.
Proposal for instruction set - now on Github
Author: Hubert Lamontagne | Date: 2016-07-06 09:51
A couple questions:

- What's the exact algo for determining if an OP contains a memory load?
- What's the exact algo for determining if an OP contains a memory store?

Generally you need to know this as soon as possible in the pipeline, because a lot of out-of-order CPUs do the memory ops in-order well before the other stuff. And even when memory ops are reordered, they need a separate reordering to preserve the proper store sequence. (Note: this must include ALL cases.)

Bonus question:

- What's the exact algo for determining if an OP is integer or FPU/vector?

Same as above, but this is to figure out which ops go into the integer reg-rename + queue + ALUs + retirement, and which ones go to the FPU/vector side.
Proposal for instruction set - now on Github
Author: Agner | Date: 2016-07-06 11:05
Hubert Lamontagne wrote:

> What's the exact algo for determining if an OP contains a memory load?

Thanks for the questions. This is important for efficient decoding. The format is determined by the instruction length (IL) and mode bits, see table 3.14 in the manual. The following formats have a memory operand: 0.4 - 0.9, 2.0, 2.2, 2.4.x, 2.8.x (see table 3.14) and two tiny instructions (see tables 4.6 and 4.7). The only mandatory instructions that have a memory store are the store instruction (see table 4.5) with the same formats as above, and two tiny instructions (tables 4.6 and 4.7). Other instructions with memory store are optional, listed in table 4.10.

> What's the exact algo for determining if an OP is integer or FPU/vector?

General purpose registers:
Proposal for instruction set - now on Github
Author: Hubert Lamontagne | Date: 2016-07-07 01:45
Some questions about your memory model:

- For every Load/Store, how is the CPU going to differentiate which memory map entry it falls into? If you have 16 memory map entries, does it compare every load/store address to all 16 block start addresses? (which requires 16 comparators in hardware if you want to calculate it in a single cycle?! or multiple cycles of latency and 4~5 comparators?!?)
- If I understand your memory allocation scheme correctly, every program's heap size is determined beforehand, and when it grows beyond that size, you absolutely have to use up one more memory map entry? Wouldn't that use up memory map entries really fast and go through hundreds of memory map entries for complex applications like a web browser? And then force the copy of hundreds of megabytes of data when something like a video game runs out of memory map entries?
- Wouldn't you at least require memory blocks to be aligned to L1 cache way size? (usually 4k~32k on modern CPUs?)... Or at very least, wouldn't it have to be aligned to L1 cache line size? (32~64 bytes typically?)... The problem with non-cache-aligned memory blocks (and more specifically addends) is that it forces you to add up this memory offset after calculating the memory address but before reading it from the cache, since you no longer have a 1:1 relationship between cache lines and virtual addresses. Forcing alignment lets you do this offsetting *in parallel* with the cache access, which saves at least 1 full cycle of cache latency on, like, everything. The other option is to use a virtually-addressed cache, but this has the downside that it introduces aliasing problems (and you have to flush the cache when the mapping changes).
Proposal for instruction set - now on Github
Author: Agner | Date: 2016-07-07 02:56
Hubert Lamontagne wrote:

> For every Load/Store, how is the CPU going to differentiate which memory map entry it falls into? If you have 16 memory map entries, does it compare every load/store address to all 16 block start addresses?

Both methods are allowed, or some hybrid. Comparators are not that expensive, so we can probably afford to have 16 comparators. It takes several clock cycles to fetch a memory operand from cache or write a memory operand to cache, so it is also possible to use a binary tree algorithm with 2-3 comparisons per clock cycle while fetching the operand speculatively at the same time. Most memory accesses will use the same one or two map entries as previous accesses, so we can save power by starting each search where the last search ended.

> If I understand your memory allocation scheme correctly, every program's heap size is determined beforehand, and when it grows beyond that size, you absolutely have to use up one more memory map entry? Wouldn't that use up memory map entries really fast and go through hundreds of memory map entries for complex applications like a web browser?

This is a weak spot indeed. I am imagining a scheme where the heap is extended with memory blocks that grow exponentially in size. If a 1 MB heap is exhausted you allocate 2 or 4 MB more. If they get exhausted you allocate 16 MB more, etc. In addition to this, we should do everything to predict the heap size. As a last resort, move data if the memory becomes too fragmented (this will cost a delay, of course, but less than the delay of swapping memory to disk). With these methods, we should still be able to keep the memory map so small that it is more efficient than a TLB with fixed page sizes. The maximum size of the memory map is implementation dependent.

> Wouldn't you at least require memory blocks to be aligned to L1 cache way size? (usually 4k~32k on modern CPUs?)... Or at very least, wouldn't it have to be aligned to L1 cache line size? (32~64 bytes typically?)

In most cases, yes. It depends on how the cache is designed. I have just specified 8 bytes as a minimum that all implementations must obey, including small systems with no cache. Smaller memory blocks may be used for special purposes such as shared memory blocks and protected sandboxes for secure input buffers. Maybe I should indicate more clearly that the 8 bytes is a minimum, not a requirement.
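The exponentially growing heap extension described above can be written out in plain C. The growth factor and sizes are illustrative only (the post itself mentions 1 MB, then 2 or 4 MB, then 16 MB); here each new block doubles the total allocated so far, so the total triples per memory-map entry.

```c
#include <stddef.h>

/* Illustrative sketch: how many memory-map entries a geometrically
   growing heap needs to reach a given size.  Each new block is twice
   the total allocated so far, so the total triples per entry. */
unsigned map_entries_needed(size_t initial_mb, size_t needed_mb) {
    size_t total = initial_mb;
    unsigned entries = 1;          /* the initial heap block        */
    while (total < needed_mb) {
        total += 2 * total;        /* next block = 2 x total so far */
        entries++;
    }
    return entries;
}
```

With this rule, a 1 MB initial heap reaches 1 GB in only 8 map entries, which is why the map can stay small enough to compete with a fixed-page TLB.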
Whole-function vectorization and conditionals
Author: Sylvain Collange | Date: 2016-08-15 04:00
Agner wrote:

> My reason for using the same registers for floating point scalars and floating point vectors is that floating point code often contains calls to mathematical function libraries. Such a library can use variable-length vectors for parameters and results. The same library function can be called with a scalar or a vector because a scalar is simply handled as a vector with one element. This will make it easy for the compiler to vectorize the code. ForwardCom has a separate register set for integer scalars. These are useful for pointers, loop counters, array indexes, etc. that don't need vectors, and they are easier to save and restore than vector registers.

This looks quite elegant. The vector length is passed implicitly to the callee without any change to the ABI. How do you handle conditionals, though? For instance:

for (int i = 0; i < N; i++) {
    if (a[i] != 0) b[i] = sin(a[i]);
}

Does the ABI define a register as the "current mask" (GPU-like)? Or should the caller compact input vectors to have only contiguous elements, and expand the output vectors (vector processor-like)?
Whole-function vectorization and conditionals
Author: Agner | Date: 2016-08-15 06:17
Sylvain Collange wrote:

> Does the ABI define a register as the "current mask" (GPU-like)? Or should the caller compact input vectors to have only contiguous elements, and expand the output vectors (vector processor-like)?

Vector registers r1-r7 can be used as masks. The normal solution for your example would be to calculate the sin function on the whole a vector and then merge the result, using a mask for (a[i] != 0). If you want to mask off some elements throughout the calculation of the sin function then the library function would need an extra parameter for the mask. (Some Intel math libraries actually have this.) You wouldn't save any time by masking off elements during the calculation. The only reason I can see to have a mask on library functions is to avoid interrupts (traps) if floating point exceptions are enabled. The sin function wouldn't generate exceptions, but the logarithm function would in your case. The recommendation is to detect floating point errors as NANs and INFs rather than using exceptions, because exceptions depend on the vector length. If you replace sin by log in your example then the log function would generate INF or NAN for non-positive values, and these values will be removed by the mask operation after the function call. If you insist on having floating point exceptions turned on, for whatever reason, then you might want a version of the log function with a mask, or you could change the non-positive values before the call to the log function.

Compacting the vector, as you propose, might be useful if the vector is very sparse (only a few of the elements are used) and you could compact several vectors into one before calling the function. However, compacting vector elements takes time, so this would be useful only for very time-consuming functions.
Whole-function vectorization and conditionals
Author: Sylvain Collange | Date: 2016-08-15 07:35
Thanks. I was considering the more general case of vectorizing arbitrary functions, including functions with side effects. There is no reason to restrict function-level vectorization to a few hand-written math functions, is there? As the compiler does not know in general whether a particular function has side effects or can raise exceptions, my point is that the mask could be part of the ABI, just as the vector length (implicitly) is. This would enable vectorization-agnostic composition of function calls.

By the way, have you considered the micro-architecture and compiler implications of using a subset of vector registers as masks? It seems to introduce quite a lot of complexity:

- Instructions take up 5 input operands from vector registers (RS, RT, RU, Mask, RD). The old register value RD is required to merge the result.
- The dependency on the old destination value and the merging operation could be avoided when unused lanes are set to 0 instead of the old value. Unfortunately, that information is encoded in the contents of the mask register itself, so the hardware does not know it early enough.
- More generally, masks encode control flow information, while vector registers encode data flow information. They are not needed at the same stages in the pipeline. Execution logic does not need the mask (except as an optimization for saving power), and control logic does not need data. Encoding control flow information and data flow information in the same registers makes decoupling these stages much harder.
- From the compiler perspective, having some instructions that can only access a subset of registers makes register allocation much harder.
Whole-function vectorization and conditionals
Author: Agner | Date: 2016-08-15 10:53
Sylvain Collange wrote:

> My point is that the mask could be part of the ABI, just as the vector length (implicitly) is. This would enable vectorization-agnostic composition of function calls.

A function may have an unlimited number of input parameters and an unlimited number of outputs (through pointers or references or class members or whatever), all of which could in principle have different masks. That would add a lot of complexity to the ABI for something that is rarely needed. I prefer to specify a mask as an explicit function parameter in the rare cases that it is needed.

> By the way, have you considered the micro-architecture and compiler implications of using a subset of vector registers as masks?

A lot of instructions in current instruction sets are using certain registers for specific purposes. I don't think this is a big problem. It is a much bigger problem to have a separate set of registers for masks only, as AVX512 has, in my opinion.

> Instructions take up 5 input operands from vector registers (RS, RT, RU, Mask, RD). The old register value RD is required to merge the result.

No, the first source operand is required to merge the result when masking. This happens to be the same as RD only if using an instruction format with too few register operands. This gives a maximum of 5 input dependencies. These are not required at the same stage in the pipeline. The execution stage typically has 2 or 3 input operands. A few unimportant optional instructions have 4 inputs. I added these instructions mainly for testing whether it is feasible to have four input operands. The address calculation stage in the pipeline can have up to three input dependencies: base pointer, index and vector length (or block size). The architecture does not specify which stage in the pipeline reads the mask register. It could be somewhere between the address calculation stage and the execution stage if the number of register read ports is critical.

So, the total maximum number of input dependencies is five, but the maximum number of input dependencies at one stage in the pipeline can be limited to three if necessary. This is still a lot of complexity for the scheduler to handle, I agree. I have limited the number of output dependencies to one for the same reason. This limitation creates problems for add-with-carry and for integer overflow detection, but people have convinced me that it was more important to limit the number of output dependencies. x86 has two output dependencies on most instructions, including the flags and the floating point control/status registers. Intel processors have traditionally had a limitation of two input dependencies for each micro-operation, but this has been increased to three in recent versions. AMD has never had any limitation on input dependencies. I don't know how expensive it is to have many read ports, but I prefer to have many read ports rather than splitting instructions into micro-operations. My idea is that a ForwardCom processor should not need to split instructions into micro-ops.

> Making the operation dependent on the former value of the architectural destination register defeats the purpose of register renaming: write-after-read dependencies are still there.

All superscalar processors handle this by renaming the register so that the input and output use two different physical registers.

> A more acceptable implementation would break down each masked instruction into 2 µops: first a computation on the whole vector, then a merge (I believe that is how Knights Landing and Skylake handle masked AVX-512 instructions).

I have not been able to get my hands on any AVX-512 processor yet, so I am not able to test it. If you have any information on the microarchitecture of these processors then please give me a link.

> More generally, masks encode control flow information, while vector registers encode data flow information.

The masking mechanism translates control flow into data flow. It does not change the dependency of the other input and output operands, as I explained above. If all elements of a vector are masked off then the hardware will still treat the input operand as a dependency and wait for it. You may want to convert this into a branch if the prediction rate is good.
Whole-function vectorization and conditionals
Author: Sylvain Collange | Date: 2016-08-15 14:05
Agner wrote:

> A function may have an unlimited number of input parameters and an unlimited number of outputs (through pointers or references or class members or whatever), all of which could in principle have different masks. That would add a lot of complexity to the ABI for something that is rarely needed. I prefer to specify a mask as an explicit function parameter in the rare cases that it is needed.

I do not see in which situation a vectorized program would need to pass multiple masks to a function. If we vectorize a loop with a function call inside, for each iteration i, either the function is called (bit mask 1) or not called (bit mask 0). Masks originate from the control flow of the original program. There is only one control flow regardless of the number of function parameters. So a single mask register is sufficient.

> So, the total maximum number of input dependencies is five, but the maximum number of input dependencies at one stage in the pipeline can be limited to three if necessary.

The pipeline stage does not make a difference. If you want to sustain an execution rate of three 5-input instructions per clock cycle, you need the bandwidth to read 15 inputs per cycle on average from the register file. It does not matter in which order and at which pipeline stage they are read. I agree that limiting the number of outputs is even more important. (Register file area scales with the square of the number of ports and power scales with its cube. But for read ports, you can always follow the brute-force approach of duplicating the register file and get linear scaling.) But input ports are also a bottleneck. Specializing registers allows this complexity to be partitioned. Most ISAs use different registers for integer and FP, for instance.

> All superscalar processors handle this by renaming the register so that the input and output use two different physical registers.

Let's take an example. Non-masked code:

mul_add.d v0, v1, v3, v4
mul_add.d v0, v5, v6, v7

Renaming maps each v0 operand to different physical registers. Assuming all inputs are ready and we have 2 FMA units with a latency of 5 cycles, both instructions run in parallel and the code takes 5 cycles. So far so good.

Masked code:

mul_add.d v0, v1, v3, v4, mask=v8
mul_add.d v0, v5, v6, v7, mask=v9

Renaming still maps v0 to different physical registers p0 and p1, but the second FMA now has a data dependency on the older value of v0, which is p0. Assuming a by-the-book OoO scheduler, the instructions will be considered dependent and the result will only be available after 10 cycles.

Now with the 2-µop option, the code becomes, after expansion and renaming:

mul_add.d p0, v1, v3, v4
merge.d p1, v0, p0, mask=v8
mul_add.d p2, v5, v6, v7
merge.d p3, p1, p2, mask=v9

Both FMAs can start immediately and produce their results after 5 cycles. Then the first merge executes in, say, 1 cycle, then the second dependent merge. Total latency is 7 cycles. In any case, renaming was unable to completely reorder the two instructions. I agree you could have a very smart scheduler that schedules instructions just 5 cycles before the mask becomes available so it is ready just in time. But it would add a lot of complexity in a critical part of the pipeline. Better to break down the work into enough simple µops and let the out-of-order engine do the heavy lifting. Then you can spend the resources you just saved on wider issue width and higher clocks.

> I have not been able to get my hands on any AVX-512 processor yet, so I am not able to test it. If you have any information on the microarchitecture of these processors then please give me a link.

At some point Intel had the Knights Landing software optimization guide on their website, but it seems they took it out. :( About masks, they say:

> AVX-512 permits instructions to use masking. The best performance usually happens for instructions that do not use masking. Instructions that mask with zeroing usually have similar performance. Instructions that mask with merging can occasionally perform much slower, since there is an additional dependency introduced to the instruction. Keep in mind performance considerations when selecting the masking mode.

Their formulation suggests they follow the single-instruction approach with naive scheduling. They do not appear to split in two µops after all. But if they write "much slower" they probably mean it... The takeaway is that the masking mode (zeroing or merging) should be available as early as possible in the pipeline, ideally at decode just by looking at the instruction. And the programmer/compiler should use zeroing whenever possible.
Whole-function vectorization and conditionals
Author: Agner | Date: 2016-08-15 23:46
Sylvain Collange wrote:

> If you want to sustain an execution rate of three 5-input instructions per clock cycle, you need the bandwidth to read 15 inputs per cycle on average from the register file.

Could you have two separate sets of register read ports, one that is wired to the address calculation stage in the pipeline (general purpose registers only), and one that is wired to the execution stage?

> Yes, x86 instructions also output flags, but x86 CPUs use tricks to turn them into single-output instructions by having slightly wider physical registers, storing the flags in the physical destination register, and renaming the architectural flags register to the physical destination of the last instruction issued.

Interesting. I wonder where you have all this information from?

> Masked code:
>
> mul_add.d v0, v1, v3, v4, mask=v8
> mul_add.d v0, v5, v6, v7, mask=v9
>
> Renaming still maps v0 to different physical registers p0 and p1, but the second FMA now has a data dependency on the older value of v0, which is p0.

No. Without zeroing, the mask chooses between the result and the first input operand, which is not the destination operand.

mul_add.d v0, v1, v3, v4, mask=v5

is the equivalent of:

for (i = 0; i < vectorlength; i++) v0[i] = v5[i] ? v1[i] + v3[i]*v4[i] : v1[i];

> Their formulation suggests they follow the single-instruction approach with naive scheduling. They do not appear to split in two µops after all. But if they write "much slower" they probably mean it...

You may as well interpret "much slower" as a consequence of the extra dependency which it has to wait for. I actually had the masking mode (zeroing or merging) coded in a bit in the instruction in an early draft of ForwardCom, but I needed this bit for other purposes, and I was able to put it into the mask register because it doesn't change dependencies when merging with the first operand rather than with the destination.
Number of input dependencies
Author: Agner | Date: 2016-08-16 03:11
Actually, we could limit the maximum number of input dependencies to four if we make the rule that instructions with three input operands including a memory operand cannot have a mask. That would not be a serious limitation.
Whole-function vectorization and conditionals
Author: Sylvain Collange | Date: 2016-08-16 10:30
Correction: I meant WAW dependencies in my previous messages, not WAR dependencies. Agner wrote: Could you have two separate sets of regiser read ports, one that is wired to the address calculation stage in the pipeline (general purpose registers only), and one that is wired to the execution stage?If both sets of ports access the same physical array of registers, then the RF component still needs m+n ports, regardless of where its ports are connected to. If you replicate the array itself to make two copies of the register file, then yes, it works: you get twice as many read ports for about twice the area. It does not work for write ports, though: you still need to perform all writebacks to both register files to keep their data coherent. Then another complication is the bypass network, which does not scale well with the number of ports either. You may want to take advantage of the fact that mask registers are a subset of general-purpose registers to replicate only part of the array. It would work on an in-order processor, but as soon as you rename registers this property does not hold any more.
Interesting. I wonder where you have all this information from?

Unfortunately, academic papers seldom go down to that level of detail, so you have to microbenchmark, speculate, and read patents. For register renaming, here are the classic ones from the P6/K7 era: https://www.google.com/patents/US6047369 https://www.google.com/patents/US5632023 Even on these 20-year-old architectures, you can see the renaming techniques are quite involved. (Thanks to the infamous INC/DEC instructions that only update a subset of the flags register...)
No. Without zeroing, the mask chooses between the result and the first input operand, which is not the destination operand.

Now I get it, thanks! Indeed, you need no extra input since you will be reading this register anyway. I was expecting the semantics of masked instructions to be "pretend nothing happens on lanes with a 0 bit mask". In your ISA even 0-masked lanes have the side effect of a register move. This is unusual, but why not... Wouldn't that complicate register allocation, though? Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code? Also, are there instruction encodings to choose between the result and the second operand (e.g. v0 = v5 ? v1 + v3 * v4 : v3; for a conditional Horner step)? |
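The difference between the two merging conventions being discussed can be sketched in a few lines of Python (function names are illustrative only, not ForwardCom or AVX-512 mnemonics): merging with the first source operand needs no extra input because that register is read anyway, while merging with the old destination adds a dependency on it.

```python
# Sketch of two masking conventions for a per-lane vector add, on plain lists.
# merge_first: disabled lanes take the value of the FIRST SOURCE operand
#              (the ForwardCom choice: src1 is already an input).
# merge_dest:  disabled lanes keep the OLD DESTINATION value
#              (AVX-512 merge-masking: the old destination is an extra input).

def masked_add_merge_first(src1, src2, mask):
    return [a + b if m else a for a, b, m in zip(src1, src2, mask)]

def masked_add_merge_dest(dest, src1, src2, mask):
    return [a + b if m else d for d, a, b, m in zip(dest, src1, src2, mask)]

v1 = [1, 2, 3, 4]
v2 = [10, 20, 30, 40]
old = [0, 0, 0, 0]
mask = [1, 0, 1, 0]

print(masked_add_merge_first(v1, v2, mask))      # [11, 2, 33, 4]
print(masked_add_merge_dest(old, v1, v2, mask))  # [11, 0, 33, 0]
```

Note how only the second variant needs the old destination as a fourth input, which is exactly the extra dependency the renamer has to track.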
Reply To This Message |
Whole-function vectorization and conditionals |
---|
Author: Agner | Date: 2016-08-17 07:33 |
Sylvain Collange wrote:
Correction: I meant WAW dependencies in my previous messages, not WAR dependencies.

I don't want to duplicate the register file, only avoid the quadratic increase you are talking about.
Then another complication is the bypass network, which does not scale well with the number of ports either.

Can you bypass a result to multiple in-flight instructions using the same bus? If so, you need only a few bypass buses. Do you have any references on how the register read ports are designed and why they are so expensive?

Interesting. I wonder where you have all this information from?

Unfortunately, academic papers seldom go down to that level of detail, so you have to microbenchmark, speculate, and read patents. Even on these 20-year-old architectures, you can see the renaming techniques are quite involved. (Thanks to the infamous INC/DEC instructions that only update a subset of the flags register...)

This is exactly the kind of cumbersome heritage that made me want to design a completely new instruction set. ForwardCom never modifies only a part of a register.

I was expecting the semantics of masked instructions to be "pretend nothing happens on lanes with a 0 bit mask".

That was never my plan. The mask value is not available at the time the register renamer assigns the output.

Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code?

A branch in a vectorized loop is translated to a mask. All the good compilers can do this today.

Also are there instruction encodings to choose between the result and the second operand (e.g. v0 = v5 ? v1 + v3 * v4 : v3; for a conditional Horner step)?

That was not my plan, but of course such an instruction could be defined if there is a need for it. But why would you have a conditional inside a polynomial? |
Reply To This Message |
Merging with first operand |
---|
Author: Sylvain Collange | Date: 2016-08-18 10:20 |
Agner wrote:

Can you bypass a result to multiple in-flight instructions using the same bus? If so, you need only a few bypass buses.

Yes, that is essentially how it is done (or at least how it is described in textbooks). But the load on the bus increases with the number of potential consumers, and so do area, delay and power. There are still a lot of tricks hardware designers can use on a given technology, but the general trends hold anyway. And as far as instruction set design is concerned, general trends are what we care about.
Do you have any references on how the register read ports are designed and why they are so expensive?

Rixner et al., Register Organization for Media Processing, HPCA 2000. www.ai.mit.edu/projects/aries/papers/imagine/register.pdf evaluates the tradeoffs using analytical models and considers various distributed and SIMD RF organizations.
I was asking specifically in the context of ForwardCom and the choice of the first operand as the masked source.

Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code?

A branch in a vectorized loop is translated to a mask. All the good compilers can do this today.

e.g. Which code is generated when you vectorize:

for all i { int x = a[i]; int y = b[i]; if(x > y) { y = x + 1; } c[i] = y; d[i] = x; }

The translation in a classic predicated ISA is straightforward, but what about in ForwardCom? I have the impression that the first operand of the masked instructions will end up being the same as the destination anyway, and that we need an extra register move.

That was not my plan, but of course such an instruction could be defined if there is a need for it. But why would you have a conditional inside a polynomial?

I do not know, but the question is what a compiler will emit when vectorizing such a program if the instruction does not exist? (And the compiler has to assume that arithmetic exceptions might be enabled.) |
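The if-conversion of the loop above can be sketched in Python standing in for generated vector code (one list operation per hypothetical vector instruction; this assumes exceptions are disabled so the add can run unmasked):

```python
# If-converted form of:  if (x > y) y = x + 1;
# compare -> mask, unmasked add, masked select/move.

def vectorized_body(a, b):
    x = a[:]                                           # x = a[i]
    y = b[:]                                           # y = b[i]
    mask = [xi > yi for xi, yi in zip(x, y)]           # compare instruction -> mask
    t = [xi + 1 for xi in x]                           # add, all lanes (no exceptions assumed)
    y = [ti if m else yi                               # masked move: keep old y on 0 lanes
         for ti, yi, m in zip(t, y, mask)]
    return y, x                                        # c[i] = y; d[i] = x

c, d = vectorized_body([5, 1, 7], [3, 4, 7])
print(c)  # [6, 4, 7]
print(d)  # [5, 1, 7]
```

With merge-with-first-operand masking, the masked move's first source would be t and the merge source y, which is where the question about an extra register move arises.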
Reply To This Message |
Merging with first operand |
---|
Author: Agner | Date: 2016-08-19 12:00 |
Sylvain Collange wrote:
Rixner et al., Register Organization for Media Processing, HPCA 2000.

Thanks for the reference.
Which code is generated when you vectorize:

This is straightforward: a compare instruction to generate the mask, an addition instruction, and a masked move instruction to conditionally replace y by x+1.

(And the compiler has to assume that arithmetic exceptions might be enabled.)

If you have enabled exceptions for integer overflow and you want to avoid the hypothetical overflow exception in a not-taken branch, then you have to put the mask on the addition instruction as well. In other words, you will have two masked instructions using the same mask. This is the reason why exceptions are not good in a vector processor. In floating point code you can disable exceptions and rely on the propagation of NAN and INF values to detect numeric errors. But there is no easy way of detecting integer overflow. I have discussed this in a previous post, but there seems to be no good solution. Many compilers and programming languages simply have no easy way of detecting integer overflow, but this may be a security problem, so I thought that we must provide some method. The following methods for detecting integer overflow may be considered:
|
Reply To This Message |
SIMD exceptions are fine with masking |
---|
Author: | Date: 2016-08-19 15:37 |
Agner wrote:

The following methods for detecting integer overflow may be considered:

What about:

5. Same as 3, and use "traditional" masking on all SIMD instructions. In traditional masking, an SIMD instruction working on vectors of n operands takes an n-bit mask. For each lane i: if bit i is set, the operation is performed normally on lane i (including triggering exceptions as needed); if bit i is unset, nothing happens on lane i (including silencing exceptions). If multiple exceptions happen in the same vector, you may either report them all at once by exposing an exception mask, or report them one by one in a well-defined order.

AVX-512 as well as AMD and Nvidia GPUs work that way, and so do most SIMD machines since the ILLIAC-IV, 45 years ago. The only exceptions were the MMX/SSE/AVX-style short-vector instruction sets, which were not "real" SIMD (at least not until AVX-512). There is no problem with exceptions in SIMD code, as long as per-lane masking is properly defined. There is some overhead for handling such masking in an out-of-order processor, as we discussed, but zeroing can avoid this overhead in many cases. |
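The per-lane semantics described in point 5 can be sketched in Python (names are made up for illustration), using integer division because it has an obvious per-lane error case:

```python
# "Traditional" per-lane masking: enabled lanes execute normally and may fault;
# disabled lanes do nothing, including not raising exceptions.
# Faulting lanes are reported as an exception mask (here, a list of lane indices).

def masked_div(dest, src1, src2, mask):
    out = dest[:]
    faults = []
    for i, m in enumerate(mask):
        if not m:
            continue                  # disabled lane: no result, no exception
        if src2[i] == 0:
            faults.append(i)          # enabled lane faults; record its index
        else:
            out[i] = src1[i] // src2[i]
    return out, faults

out, faults = masked_div([0, 0, 0], [8, 9, 10], [2, 0, 0], [1, 1, 0])
print(out)     # [4, 0, 0]
print(faults)  # [1]  -- only the enabled divide-by-zero lane is reported
```

Lane 2 also divides by zero, but its mask bit is clear, so it is silenced; this is the property that makes masked-off speculative lanes safe.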
Reply To This Message |
SIMD exceptions are fine with masking |
---|
Author: Agner | Date: 2016-08-20 00:21 |
Thank you for your inventiveness.
What about:

I consider this as implied in (3) because the mask in ForwardCom suppresses all error conditions for disabled elements. Maybe I didn't express this clearly enough. It is still a problem that the behavior depends on the vector length. If a loop generates two exceptions then these exceptions may happen simultaneously if vectors are long, but separately if the same code is run on another processor with shorter vectors. It is difficult to construct the exception handler so that it is guaranteed to generate the same result regardless of the vector length of the microprocessor.
I can think of a couple more options:

Then you have a problem if you want to propagate the overflow bit. You could either check the overflow after each instruction at the cost of N branch instructions in a calculation with N steps; use OR instructions to join the overflow flags at the cost of N OR instructions; or have an extra input for the previous flag on each overflow-check instruction at the cost of having more input dependencies on the instructions and another dependency chain, making out-of-order execution more difficult. Anyway, the throughput will be 1/3 of the throughput without overflow checking, where method (4) gives 1/2 throughput.

Method 6: Have separate vector flags registers. This is like method 1 but instead of having the extra inputs and outputs go to the same register file (which requires more ports and thus more micro-ops), they go to a separate register file that only deals with flags, which means that they can have their own register renamer and their own ports, which avoids the N^2 growth associated with adding more ports.

This still requires an extra input dependency and an extra output dependency on each instruction. I think this would be quite costly for the out-of-order scheduler.

Method 7: Do it all in software. This makes checked operations take multiple instructions which might be slower. But it reduces the overall complexity of the design, which means that you might be able to run 1 or 2 more instructions per cycle, which probably offsets the extra cost of separate checking. This can probably be combined with option 5 (just have a couple extra instructions for speeding up overflow checking for the cases that really need it).

I am not sure I understand how this differs from your method 5.

Very rough cost-benefit analysis:

Yes, this shows that all the methods are costly for the hardware as well as for the compiler. I don't want to split instructions into multiple micro-operations.
|
Reply To This Message |
SIMD exceptions are fine with masking |
---|
Author: Hubert Lamontagne | Date: 2016-08-20 18:49 |
Agner wrote:

This is why the add instruction triggers a fault on... IBM Z-system, I *THINK*. MIPS has a separate specialized instruction for this as well. It's the same reason as for divide-by-zero exceptions: it's used in applications where aborting a failing application is a good thing in general (like server software talking to a database, where the earlier you shut it down, the less damage you inflict...). This is not too popular among end-user facing applications like video games and stuff like Photoshop, because in that case an application shutdown is already the worst thing that can happen.

Agner wrote:

If I'm not mistaken, that's how LISP did it. The catch with this one is that it requires dynamic typing to work. If you're running a language with dynamic typing, the chances that you're going to be able to run any vector code at all are low. Even for scalar code, dynamically typed languages typically have some other much worse bottlenecks than integer computation (multiple levels of indirection to get to data in Python, for instance). For something like C++ code, IMHO it's just way easier and faster to supersize all your integers to 64 bits from the start than to deal with any sort of size change.

Agner wrote:

This happens in one very precise case: BIGNUM computation. How important is BIGNUM for you? My impression on this is that BIGNUM falls kinda low on priority lists for architectures like ARM. If you design your architecture for BIGNUM, you can probably make it much more efficient for this kind of stuff than a generic architecture like MIPS. I guess it comes down to what you're targeting.

Agner wrote:

My experience on that is that if you're running into signed overflow problems, the easiest way is simply to switch to floating point or supersize your integers to 64 bits.
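The "supersize to 64 bits" approach amounts to computing in a wider range and checking whether the exact result fits the narrow type. A minimal sketch in Python (whose native integers are unbounded, so 32-bit wraparound has to be modeled explicitly):

```python
# 32-bit signed add, two ways: a wrapping result as hardware would produce it,
# and overflow detection by first computing the exact ("64-bit") sum.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def add32_wrap(a, b):
    # Wrap the exact sum into the signed 32-bit range, like a hardware add.
    return (a + b + 2**31) % 2**32 - 2**31

def add32_checked(a, b):
    wide = a + b                                  # exact, fits in a wider type
    overflow = not (INT32_MIN <= wide <= INT32_MAX)
    return add32_wrap(a, b), overflow

print(add32_checked(2**31 - 1, 1))   # (-2147483648, True)
print(add32_checked(100, 23))        # (123, False)
```

In C++ the same trick is one widening add plus a range check per operation, which is the software cost being weighed against hardware overflow support in this thread.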
Reply To This Message |
SIMD exceptions are fine with masking |
---|
Author: Sylvain Collange | Date: 2016-08-25 07:43 |
Agner wrote:

I consider this as implied in (3) because the mask in ForwardCom suppresses all error conditions for disabled elements. Maybe I didn't express this clearly enough. It is still a problem that the behavior depends on the vector length. If a loop generates two exceptions then these exceptions may happen simultaneously if vectors are long, but separately if the same code is run on another processor with shorter vectors. It is difficult to construct the exception handler so that it is guaranteed to generate the same result regardless of the vector length of the microprocessor.

I see. I agree that there are scenarios in which it is important to signal exceptions in the same order as they would occur in the original sequential program, e.g. when speculatively vectorizing a while loop with a data-dependent exit condition. There are still solutions that are not /that/ difficult:

Option 1. In the exception handler, find out which is the first lane that produced an exception, named i (I assume the ISA gives this information somehow).

Option 2. Same approach, using hardware instead of software. The "pending exception" becomes a flag register, with 1 flag per lane. The current vector length or mask is automatically reduced as some lanes record exceptions. A new 'commit' instruction is called at the end of each iteration. It raises the recorded exception that would be the first in sequential order.

In practice, I do not expect this to be often necessary, as a speculative vectorizer will need to restore a snapshot to undo all unwanted side effects and restart the loop whenever an exception occurs anyway, regardless of which lane it occurs on. |
Reply To This Message |
Merging with first operand |
---|
Author: Hubert Lamontagne | Date: 2016-08-19 17:52 |
Agner wrote:

If you have enabled exceptions for integer overflow and you want to avoid the hypothetical overflow exception in a not-taken branch then you have to put the mask on the addition instruction as well. In other words, you will have two masked instructions using the same mask.

I can think of a couple more options:

Method 5: Have an extra instruction that applies the operation but generates potential-overflow-or-exception flags instead of result values. This is equivalent to option 1 but using 2 instructions instead of one (but is roughly the same cost in the pipeline, since you need 2 micro-ops in both cases to store all the results), or it's like option 4 but without having the two results interlaced together.

Method 6: Have separate vector flags registers. This is like method 1 but instead of having the extra inputs and outputs go to the same register file (which requires more ports and thus more micro-ops), they go to a separate register file that only deals with flags, which means that they can have their own register renamer and their own ports, which avoids the N^2 growth associated with adding more ports. Plus, you can mandate that the whole flags register is always updated, to avoid x86 flags-register merging insanity. And you can probably have a limit of 1 op per cycle on the number of flags operations to limit the size of the extra flags manipulation block you'd be adding to the CPU (see how ARM did this: most arithmetic ops never touch the flags, so it needs much less aggressive handling of flags). If you're doing heavy-duty flags register operations, this might be worth the trouble - which is why 8-bit and 16-bit CPUs all have flags (they need them to deal with oversized integers).

Method 7: Do it all in software. This makes checked operations take multiple instructions, which might be slower.
But it reduces the overall complexity of the design which means that you might be able to run 1 or 2 more instructions per cycle, which probably offsets the extra cost of separate checking. This can probably be combined with option 5 (just have a couple extra instructions for speeding up overflow checking for the cases that really need it). Very rough cost-benefit analysis: |
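Method 5 can be sketched as a pair of vector operations: a normal wrapping add producing the results, and a second flag-generating operation producing per-lane overflow bits for the same inputs. A minimal Python model on 32-bit lanes (function names are invented for illustration):

```python
# Method 5 sketch: results and overflow flags come from two separate
# "instructions" over the same inputs, modeled on signed 32-bit lanes.

BITS = 32
LO, HI = -2**(BITS - 1), 2**(BITS - 1) - 1

def vadd_wrap(a, b):
    # The real add: wraps each lane into the signed 32-bit range.
    return [((x + y + 2**(BITS - 1)) % 2**BITS) - 2**(BITS - 1)
            for x, y in zip(a, b)]

def vadd_overflow_flags(a, b):
    # The flag-generating companion: 1 per lane that would overflow.
    return [int(not (LO <= x + y <= HI)) for x, y in zip(a, b)]

a = [HI, 1, -5]
b = [1, 2, -7]
print(vadd_wrap(a, b))            # [-2147483648, 3, -12]
print(vadd_overflow_flags(a, b))  # [1, 0, 0]

# Flags from successive steps join with a per-lane OR -- the
# "N OR-instructions" cost discussed in the thread:
flags = [f1 | f2 for f1, f2 in zip(vadd_overflow_flags(a, b), [0, 1, 0])]
print(flags)                      # [1, 1, 0]
```

The two operations share inputs but not outputs, which is why this costs two instruction slots yet avoids the extra input/output dependencies of methods 1 and 6.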
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-08-17 23:46 |
Agner wrote:
Joe Duarte wrote:

I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache.

Agner, by the way, I just learned that my idea of a small cache for constants (preferably in the form of special registers, or at least an L1-speed cache) has been widely implemented for years in Nvidia GPUs. It's called "constant memory", and reserves 64 KiB. See: http://cuda-programming.blogspot.com/2013/01/what-is-constant-memory-in-cuda.html http://stackoverflow.com/questions/18020647/cuda-constant-memory-best-practices

I was thinking of something smaller, like 256 bits in a reserved register for data of any type, and something like 2 or 4 KiB of stable application-specific L1-equivalent cache. The right figures should be empirically determined by careful research.

In fact, I don't think ISAs should settle on 32-bit and 64-bit types and registers unless those turn out to be optimal sizes, which I think is unlikely. As long as we can take care of alignment issues, I would determine the right type and register sizes empirically, based on what we know about common workloads and applications, as well as foreseeable future workloads. I think it's likely that 20-, 40-, and 80-bit types and registers will be more performant than 32/64. (I have no idea what the optimal vector register sizes would be, but I like 320-bit. 64-bit is such a waste when used for memory addresses and pointers. On mobile and desktop, we should be fine with a 1-terabyte address space, which 40 bits give us.) |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-18 09:01 |
Joe Duarte wrote:
Agner, by the way, I just learned that my idea of a small cache for constants (preferably in the form of special registers, or at least an L1-speed cache) has been widely implemented for years in Nvidia GPUs. It's called "constant memory", and reserves 64 KiB.

ForwardCom can have read-only memory areas, but they are not guaranteed to be cached. I assume that you want a small memory area that is guaranteed to always be cached. That might actually give a speed gain, since memory access is the most important bottleneck in many applications. The Itanium did not have success with "explicit parallelism" because it was too difficult to handle for the compiler. Maybe "explicit caching" will be easier to handle. You might want to specify explicit caching for the top of the stack in the innermost loop, or for often-used constants, jump tables or global variables. There is a problem with task switching, though. The entire contents of the on-chip extra cache has to be swapped to main memory on each task switch. If you want fast task switching, you might want to have multiple cache banks, one for each process. That would be cumbersome. Another possibility would be to reserve the extra cache for a particularly critical process. This process would monopolize the extra cache in one or more CPU cores while less important processes would be prevented from accessing it. That's an extra complication for the operating system to keep track of, but it is a quite common scenario that one process is speed-critical while all other processes are unimportant background processes. Another possibility, which is perhaps simpler to handle, is an extra set of registers which are used for extra storage. Assume, for example, that we supplement the vector registers v0 - v31 by an extra set of vector registers w0 - w31. The extra registers can perhaps do nothing but read and write, e.g.
w8 = v5; v2 = w8;

These extra registers can be used for often-used constants, for global variables, and for temporarily storing a normal register rather than saving it on the stack. With an extra set of registers you have the complication of calling conventions. ForwardCom has a mechanism for telling the compiler which registers are modified by e.g. a library function. This mechanism could be extended to the extra registers. A library function could save a callee-save register to an extra register rather than saving it on the stack, and reserve this extra register by indicating in the object file which extra registers it is using. The extra registers still have to be saved at every task switch if they are used.
In fact, I don't think ISAs should settle on 32-bit and 64-bit types and registers unless those turn out to be optimal sizes, which I think is unlikely. As long as we can take care of alignment issues, I would determine the right type and register sizes empirically, based on what we know about common workloads and applications, as well as foreseeable future workloads.

The alignment problem is serious. The hardware would have to support misaligned memory accesses as well as vectors of odd-size elements, which would be quite expensive for the hardware design and for performance. There is also a standardization issue. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-08-31 06:56 |
Agner wrote:

The Itanium did not have success with "explicit parallelism" because it was too difficult to handle for the compiler. Maybe "explicit caching" will be easier to handle. You might want to specify explicit caching for the top of the stack in the innermost loop or for often-used constants, jump tables or global variables.

I like your idea of the extra registers that do nothing but read/write. The key of what I want is stability, and perhaps determinism, in being able to cache application-specific data, constants, masks, etc. The CPU cache right now is a jungle.
|
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-31 11:08 |
Joe Duarte wrote:

For example, maybe some instructions are only allowed at the start of a process. After a certain milestone, some instruction codes switch to represent different instructions – instructions that cannot be used at the start of a process, only after the milestone.

The opcode numbers are not so crowded that this would be worth the extra complexity, but some code spaces are reserved for application-specific or user-defined instructions.
I wonder if we could use a powers-of-two type, such that the literal binary value would represent an exponent of two, e.g. 10001101 would mean 2^141. Something like that would help with encoding some large numbers, though powers of two might not be the right form.

ForwardCom includes formats that code constants as a << b = a · 2^b, but only for immediate constants, not for variables. It also supports half-precision floating point constants - but not variables.
Relatedly, instead of hard-coding a single constant like π the way I was thinking several months ago, there could be a type that represented a set of constants – maybe four or five bits to encode 16 to 32 predesignated constants that were empirically determined to recur frequently enough in a wide range of applications to be worth encoding.

If we have an extra set of registers, these could be used for constants that are reused in a program. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-01 12:16 |
Joe Duarte wrote:

And I wonder about simpler, more specialized cores. Right now you've got all these powerful AVX, AVX2, BMI, FMA, TSX instructions and all the legacy x86 and 386 instructions supported on every single core. Maybe it would make sense to have a couple of Turbo cores or something that were SIMD-only, or optimized for heavy compute or HPC workloads. There's been some research on how to design a processor tailored for OpenCL, for example. Maybe ForwardCom could have a simpler, faster subset of the ISA designed for highly parallelized applications using OpenMP, OpenCL, or something like Threading Building Blocks. It would be like Big/Little, but more specifically aimed at high performance parallel code in the little cores. A new ISA doesn't have the backwards compatibility constraints Intel is limited by, where they throw the kitchen sink into every core.

Indeed, the 1st-gen Xeon Phi (a.k.a. Knights Corner) was basically a 64-bit extended P54C core (pre-MMX Pentium) with a custom (sorta like AVX-512F) 512-bit vector ISA. And it was actually marketed as a non-MMX/SSE/AVX-compatible product. As the current-gen Xeon Phi (Knights Landing) has adopted AVX-512 but also the whole MMX/SSE/AVX sets, an "AVX-512 only" core would surely be conceivable and even desirable in a hybrid design (like the Big/Small scheme). But I guess Intel wouldn't like to market another non-backwards-compatible CPU. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-07-12 03:35 |
It is probably not difficult to create a new x86 version that is user mode compatible with most modern programs but lacks things like segmentation (including the GDT/LDT) and real mode. New OS versions would be required, but most modern user mode programs would work with few if any modifications. Anyone want to do a proposal for that? |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Hubert Lamontagne | Date: 2016-07-12 09:11 |
Yuhong Bao wrote:

It is probably not difficult to create a new x86 version that is user mode compatible with most modern programs but lacks things like segmentation (including the GDT/LDT) and real mode. New OS versions would be required, but most modern user mode programs would work with few if any modifications. Anyone want to do a proposal for that?

Afaik, this is one of the things designed into UEFI (a pathway to removing 16-bit/real mode and segmentation by booting straight into 32-bit or 64-bit mode). |
Reply To This Message |
Things from MIPS (and novel things) |
---|
Author: Anonymous | Date: 2016-07-28 07:10 |
Agner, have you studied the MIPS architecture? What do you think of "branch delay slots"? Personally, I like them, because of the following: there are things the software developer (and the compiler) easily knows, but the hardware developer will hardly know. The instruction set should give the software developer means to express that knowledge to the hardware. One thing software developers know is which procedure will be called on a function pointer (because of polymorphism, indirect branches have become quite popular), but the hardware couldn't possibly know it. When indirect branch instructions like "CALL (register)", "JR (register)" or "JALR (register)" are executed, the hardware is absolutely clueless as to where to move forward, because it needs to know the value of the register before proceeding (which it likely won't know; because of the modern pipelined approach, it needs the write-back value of the register). This will very likely lead to a stall. But, with the instructions "SETCALL (register)" and "CALL (no arguments)", "SETJUMP (register)" and "JUMP (no arguments)", it's possible to execute instructions while the processor is fetching instructions from the target address, like this:

SETCALL (register) ; sets the call target, the processor starts to fetch from it

The delay slot gives the processor time to prepare for the call or jump, and relieves the branch predictor of work. The branch predictor might be completely removed from the processor with this approach (more silicon for the hardware developer!). I have never seen any processor with instructions like that (and I have studied the 68000, x86, x64, ARM and MIPS). "if" statements could also benefit from this approach. Delay slots are not novel; the novel part here is that their size can be arbitrarily set. I like the statements in section "10.8 Function libraries and link methods". I wonder if they are there because of the indirect branch problem. Cache misses are also a big topic.
Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order:

CALL A; CALL B; CALL C; CALL D; CALL E;

But only the data for the E function is ready (is in the cache). Would the processor stall while waiting for the data of A, B, C, D to arrive? And, even worse, wait for the completion of A, B, C and D?

I like that vectors are present in this new instruction set and are an important part of it. Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing. I'm currently trying to write a fast 2048-bit modular multiplier (for exponentiation with 2048-bit exponents, which should happen in under tens of milliseconds, and currently is taking hundreds of milliseconds), and, although the Toom-Cook algorithm uses matrix multiplication, it's simply not possible to express it nicely with the instructions given by x86 or x64. |
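For scale, the 2048-bit exponentiation workload described can be sketched with Python's built-in three-argument pow, which performs fast modular reduction internally; this is only a reference point, not the hand-tuned Toom-Cook multiplier the poster is writing, and the operand values here are arbitrary.

```python
import random

# Build an arbitrary odd 2048-bit modulus and random base/exponent,
# then perform one full modular exponentiation.
random.seed(1)
modulus = random.getrandbits(2048) | (1 << 2047) | 1   # force 2048 bits, odd
base = random.getrandbits(2048) % modulus
exponent = random.getrandbits(2048)

result = pow(base, exponent, modulus)                  # square-and-multiply internally

print(result < modulus)              # True: result is fully reduced
print(modulus.bit_length() == 2048)  # True
```

Each of the roughly 2048 squaring steps inside such an exponentiation is a 2048-bit modular multiply, which is where a vector-friendly multiplier would pay off.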
Reply To This Message |
Things from MIPS (and novel things) |
---|
Author: Agner | Date: 2016-07-28 12:16 |
Anonymous wrote:
What do you think of "branch delay slots"?

I would categorize this under the label of explicit parallelism. You are explicitly specifying what the processor should do while fetching a jump target. I think there is a problem with letting the compiler decide what to do in parallel rather than leaving this decision to the CPU. The compiler will have to know which things the processor can or cannot do in parallel and how many instructions it can execute in the time it takes to fetch the jump target. That will lock you into a particular hardware implementation. If you later develop a new processor with different timings or a different degree of parallelism, then you cannot run optimally on the new processor unless you compile again for this particular processor. We have seen this problem with the Itanium, where instructions were put together in blocks of three. The decision of which three instructions to execute in parallel was left to the compiler rather than the CPU. It was very difficult to make such a compiler, and you were pretty much locked into a fixed degree of parallelism. I think it is better to have a genuine superscalar design with full out-of-order capabilities. Then you can run the same binary code on different CPUs with different amounts of parallelism, and the compiler doesn't need to know the details of the CPU.

One thing software developers know is which procedure will be called on a function pointer (because of polymorphism, indirect branches have become quite popular), but the hardware couldn't possibly know it.

In a typical application, the value of the function pointer will be available long before the values of the function parameters, so that the call instruction can be executed out of order. The CPU will execute the call instruction and fetch and decode the instructions inside the function while it is waiting for the function parameters.

Cache misses are also a big topic.
Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order: CALL A; CALL B; CALL C; CALL D; CALL E;

No. Assuming that there are no code cache misses, but only data cache misses, a superscalar CPU will fetch and decode everything in advance, if the out-of-order buffer is big enough, while waiting for data. If the data for E are available before the data for the preceding functions, then E will be executed first.

Anonymous wrote:
I like that vectors are present on this new instruction set and are an important part of it. Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing.

Matrix multiplication often involves a lot of data permutation and horizontal addition, which takes extra time. |
Things from MIPS (and novel things) |
---|
Author: Hubert Lamontagne | Date: 2016-07-28 13:11 |
Anonymous wrote:
What do you think of "branch delay slots"?

My opinion on branch delay slots is that they're great for a certain size of CPU: large enough to be 32 bits and maybe have a code cache (so that it's not limited by instruction memory bandwidth), but small enough that it doesn't have a branch predictor. MIPS is a perfect example of this, and tons of MIPS cores have been used in stuff like Playstations, where they're the ideal size. Not to mention the whole "fast microcontroller" market - which is right now dominated by ARMs (which don't have a branch delay slot but would be the right kind of CPU for it). Afaik, Nvidia's Denver architecture (in-order VLIW, very modern) has branch delay slots. Once your CPU is large enough to have a branch predictor, the speed benefit shrinks a lot. And when your CPU becomes out-of-order, the branch delay slot becomes a bit of a liability, since it increases the complexity of the corner cases, and out-of-order already forces you to have a rollback mechanism anyway - so why not use it to run instructions speculatively past branches? And when your pipeline becomes longer, this means you still have stalls unless your branch delay slot is very long: for instance, if your whole issue stage is 5 cycles and your CPU runs 3 instructions per cycle, you need a branch delay slot of 15 instructions... kinda a lot, no? That being said, your proposal is interesting, and for a semi-ordered CPU (something in between out-of-order and in-order) it would probably be a good idea, and it is not too risky, because even if it turns out not to be a good idea, it can still probably be implemented fairly easily in an out-of-order CPU. But I don't think I've ever seen any good examples of semi-ordered architectures - once you go past a certain complexity, it becomes simpler to implement the Tomasulo algorithm and register renaming and go full out-of-order.

Anonymous wrote:
Cache miss is also a big topic. 
Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order: CALL A; CALL B; CALL C; CALL D; CALL E; But only the data for the E function is ready (is in the cache). Would the processor stall while waiting for the data of A, B, C, D to arrive? And, even worse, wait for the completion of A, B, C and D?

This is what out-of-order CPUs do! Provided that you don't exceed the limits on how many instructions the CPU can reorder and what situations it can deal with, they're designed to run at least the parts of functions further up ahead that generate memory addresses, which lets you deal with a lot more memory latency. |
Matrix multiplication |
---|
Author: Agner | Date: 2016-07-29 07:51 |
Anonymous wrote:
Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing [...] it's simply not possible to express it nicely with the instructions given by x86 or x64.

I wonder if matrix multiplication is so important that it would be worthwhile to implement special instructions that facilitate it. A complete matrix multiply instruction would be far too complex, but instructions for doing the necessary permutations would be realistic. Assume that we have an NxM matrix A and an MxP matrix B and we want to calculate the NxP product AB. The matrices are stored in row-major order. We can calculate the matrix product in the following way. Copy the first column of A into P identical columns to make an NxP matrix with all columns identical. Copy the first row of B into N identical rows to make an NxP matrix with all rows identical. Multiply the two NxP matrices, element by element. Do the same with the second column of A and the second row of B, and so on. Now we have M matrices of size NxP. The result is the sum of these M matrices. If we have a universal permute instruction, then we can copy the rows or columns without problems. However, the complexity of a full permute instruction grows with the square of the vector length, so this may be too expensive for very long vectors. Instead, we could have a repeat-block instruction which repeats the first sub-vector into a long vector, and a repeat-within-block instruction that copies the first element of each subvector into all elements of its subvector block. These instructions can be used for repeating the rows and columns, respectively. If the vector register is not long enough to hold the entire matrix, then split the matrix into multiple registers, each holding an integral number of rows. This will still be somewhat complicated to implement, but much cheaper than a universal permute instruction. |
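The broadcast-and-accumulate procedure above can be sketched in plain Python, as a scalar stand-in for the proposed repeat-block and repeat-within-block vector instructions (the function name is illustrative, not part of ForwardCom):

```python
def matmul_broadcast(A, B):
    # A is N x M, B is M x P, both row-major lists of lists.
    N, M, P = len(A), len(B), len(B[0])
    C = [[0.0] * P for _ in range(N)]
    for k in range(M):
        # "repeat-within-block": column k of A broadcast across each row
        col = [[A[i][k]] * P for i in range(N)]
        # "repeat-block": row k of B repeated to make N identical rows
        row = [B[k][:] for _ in range(N)]
        # element-wise multiply and accumulate the N x P partial product
        for i in range(N):
            for j in range(P):
                C[i][j] += col[i][j] * row[i][j]
    return C
```

Each pass of the outer loop corresponds to one pair of broadcasts plus one element-wise multiply-accumulate over the whole NxP result, which is exactly the M-step sum described above.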
Matrix multiplication |
---|
Author: Hubert Lamontagne | Date: 2016-07-29 09:33 |
ARM's SIMD has instructions that can help matrix multiply and are also useful for a whole bunch of other SIMD tasks:
- VMUL, VMLA, VMLS with broadcast operand - vector multiply by a broadcast scalar extracted from a vector register (plus accumulating and subtracting versions)
- VTRN - matrix transpose
- VPADD - pairwise add
- VZIP, VUZP - interleave/deinterleave vectors
- VDUP - broadcast a scalar extracted from a vector register to a whole vector register |
Matrix multiplication |
---|
Author: John D. McCalpin | Date: 2016-07-29 10:03 |
There are two problems with matrix multiplication in recent architectures:

1. The biggest problem with matrix multiplication comes from the need to keep many independent accumulators to tolerate pipeline latency. For example, on Intel's Haswell (Xeon E5 v3) processor, with two FMA units and a 5-cycle FMA latency, a minimum of 10 accumulators is required. Each of these accumulators must be a full-width SIMD vector in order to use all the FMA hardware. It turns out that 10 accumulators is not a useful number for register blocking of matrix multiplication (5x2 and 2x5 don't provide good re-use), so the optimum algorithms must use 12 accumulators -- leaving only 4 register names for everything else. The optimal blocking uses 3 registers for values that are re-used in non-adjacent code, leaving a single register for handling all of the load-with-broadcast values that are used for the input array that requires transposition.

2. A secondary problem is the absence of transposition support. When going from AVX2 (described above) to AVX-512, the number of accumulators required remains the same, but the wider SIMD registers mean that the code must implement 8 load-with-broadcast operations (for the transposed input array) in the same number of cycles that were required for 4 load-with-broadcast operations on Haswell. The only way that this can happen is to use a larger blocking factor for register blocking, so that more data re-use occurs and more cycles are available with a free load port. Since AVX2 already required all 16 registers, an AVX-512 implementation clearly requires more -- so it is not surprising that the AVX-512 instruction set provides 32 named registers.

The first problem is probably the easiest to solve. A floating-point multiply-add unit with a single-cycle multiply-accumulate mode is described by S. Jain et al. in "A 90mW/GFlop 3.4GHz reconfigurable fused/continuous multiply-accumulator for floating-point and integer operands in 65nm", VLSID '10, 2010. 
This would reduce the number of accumulators to 2 (one per FMA unit) and (in the AVX2 example) would free 10 registers for holding data for re-use. The second problem is definitely more difficult to solve. Although it would be relatively easy to build an FMA unit with a built-in broadcast of a scalar element, this would not help with getting the scalar element(s) into the register(s) in the first place. In the standard form of matrix multiplication, the values that need to be broadcast are collected from strided memory addresses (with a stride equal to the dimension of the matrix). Given that most (all?) recent processors have "wide" (multiple 64-bit word) interfaces at all levels of the cache/memory hierarchy, there is physically no way to get full bandwidth while extracting only one element per block. A possible route forward would be a SIMD "transpose with broadcast" instruction, taking a single vector of P contiguous elements and broadcasting each element across one of P output SIMD registers. The overall matrix multiplication algorithm would have to be organized so that all of these P elements would be used before being discarded, which restricts the space of possible orderings considerably. I have not worked through the details to see if this leads to a workable solution. A non-SIMD approach based on independent single-lane vector units operating from a scratchpad memory (rather than SIMD registers) might be preferred, but that opens up too many options to think about. Some encouraging developments in high-radix on-chip crossbar technology that would help in any of these approaches to parallelism are reported by Sudhir Satpathy et al. in a number of publications, including |
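The accumulator arithmetic above (2 FMA units x 5-cycle latency = 10 independent dependency chains) can be illustrated with a scalar Python sketch of a register-blocked dot product; rotating over the accumulators is what keeps every FMA slot busy in real code (the function and names are illustrative):

```python
def dot_multi_acc(x, y, n_acc=10):
    # 10 accumulators = 2 FMA units x 5-cycle latency in the Haswell
    # example; each accumulator carries an independent dependency chain,
    # so no multiply-add has to wait for the previous one to retire.
    acc = [0.0] * n_acc
    for i in range(len(x)):
        acc[i % n_acc] += x[i] * y[i]   # FMA into a rotating accumulator
    return sum(acc)                     # combine the chains once at the end
```

In hardware terms, the final `sum(acc)` is the only place where the chains interact, which is why the loop body can sustain the full FMA throughput.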
Matrix multiplication |
---|
Author: Agner | Date: 2016-07-29 11:03 |
John D. McCalpin wrote:
A possible route forward would be a SIMD "transpose with broadcast" instruction, taking a single vector of P contiguous elements and broadcasting each element across one of P output SIMD registers.

Hubert argued that multiple outputs are a bad thing for an out-of-order CPU. Consequently, I made the ForwardCom design so that it allows only one output register for each instruction. But it allows very long vector registers. I am thinking about the situation where the vector register is long enough to contain the entire matrix, or at least several rows of it. What I proposed is the same as you described, except that it is all contained in a single vector register. |
Introduction website |
---|
Author: Agner | Date: 2016-08-01 01:48 |
I have made an introduction to the ForwardCom project at
www.forwardcom.info.
I have added a few optional instructions to facilitate matrix multiplication. |
Introduction website |
---|
Author: EricTL | Date: 2017-07-17 18:15 |
Great proposal so far, with a lot of interesting ideas. May I suggest a couple of instructions:
- Bit reverse: reverse all the bits of a register. This is very useful because it enables bit hacks on the most significant bits.
- LUT-based logical ops: Rd, Rs, Imm: Rd OPS Rs -> Rd, where OPS is an 8-bit LUT implementing any three-input, one-output logical expression.
- MultiCarryOps: |
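For reference, the suggested bit reverse is the classic divide-and-conquer bit swap; a Python sketch of a 32-bit software fallback (illustrative, not an ISA definition):

```python
def bit_reverse32(x):
    # Reverse the 32 bits of x by swapping progressively larger groups.
    x = ((x & 0x55555555) << 1) | ((x >> 1) & 0x55555555)   # swap odd/even bits
    x = ((x & 0x33333333) << 2) | ((x >> 2) & 0x33333333)   # swap bit pairs
    x = ((x & 0x0F0F0F0F) << 4) | ((x >> 4) & 0x0F0F0F0F)   # swap nibbles
    x = ((x & 0x00FF00FF) << 8) | ((x >> 8) & 0x00FF00FF)   # swap bytes
    x = ((x & 0x0000FFFF) << 16) | (x >> 16)                # swap half-words
    return x & 0xFFFFFFFF
```

A single-cycle hardware version is just a fixed wire permutation, which is part of why the instruction is cheap to provide.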
Introduction website |
---|
Author: Agner | Date: 2017-07-18 00:28 |
EricTL wrote:
May I suggest a couple of instructions: Bit reverse: reverse all the bits of a register.

This instruction exists. It is called bit_reverse.

LUT-based logical ops:

Instruction truth_tab2 is a two-input LUT. Instruction truth_tab3 is a three-input LUT. These instructions implement arbitrary functions of boolean vectors. However, the function is applied only to the least significant bit of each vector element. There are two reasons for this limitation. (1) This is the way boolean vectors are represented in ForwardCom. The boolean value is stored in the least significant bit. The remaining bits are option bits, which can be propagated to the result and used for specifying options to e.g. a floating point instruction using the LUT result as a mask. (2) The 3-input LUT can be implemented in hardware by using an 8-bit barrel shifter, where the 8 bits of the LUT are shifted right by the count defined by the three input bits. The hardware already has an 8-bit barrel shifter for each 8-bit vector element, so the implementation is cheap. A LUT function that applies to all bits of a vector would require one 8-bit barrel shifter for each vector bit, which would be more expensive in terms of silicon space. I am not sure I understand what you want to do with the msb and lsb inputs?

MultiCarryOps:

Interesting idea. But this has no high-level language equivalent. It will only be used by programmers who master the use of intrinsic functions or assembly language. A simpler solution is to just leave one unused bit between each bitfield to avoid carry from one bitfield to the next. BTW, I have made the move_bits instruction for efficient insertion and extraction of bit fields. |
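The barrel-shifter trick described above can be modeled in a few lines of Python: the three boolean inputs form an index, and the 8-bit LUT is shifted right by that index (a software model of truth_tab3, not ForwardCom code; the names are illustrative):

```python
def truth_tab3(lut8, a, b, c):
    # Three boolean inputs select one of the 8 LUT bits: shift right by
    # the index (a:b:c) and keep bit 0 -- exactly the barrel-shifter trick.
    index = (a << 2) | (b << 1) | c
    return (lut8 >> index) & 1

# Example: majority-of-three. Its truth table, read from index 7 down to
# index 0, is 11101000 in binary, i.e. the constant 0xE8.
MAJORITY = 0xE8
```

Any of the 256 possible three-input boolean functions is obtained the same way by choosing a different 8-bit constant.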
Introduction website |
---|
Author: EricTL | Date: 2017-07-20 19:19 |
I thought I replied already. Maybe I did not send it.

Agner wrote:
Instruction truth_tab2 is a two-input LUT.

Let's put aside the shift left or right by one (the immediate lsb and msb were there to feed the shift on both ends of the register). A 4-bit LUT can be made with 3 multiplexers (mux.) per bit.
The case for an 8-bit LUT (a universal three-input logical operation) is less clear. The mux is now 8:1 => 14 Tr, and the broadcast doubles to 256 Tr => 1152 Tr. To finish on a rather hackish and somewhat silly note: in the case of the tiny instruction format, the 4-bit LUT can be implemented by using the code for Rs as both the register and the LUT. In that form, Clear can be implemented as LUT R0, thereby reclaiming that opcode. The Not opcode can be reclaimed as well, by LUT R5 (LUT convention b3,b2,b1,b0 -> 0101). |
Introduction website |
---|
Author: Agner | Date: 2017-07-20 23:27 |
I don't like the idea of using the same bitfield as both instruction and register. You will not have a free choice of what register to use, and the out-of-order scheduler will have problems detecting whether it should wait for the value of that register or not. |
Proposal for instruction set - now on Github |
---|
Author: Joe Duarte | Date: 2016-08-04 01:23 |
Agner, there are a couple of specialized instruction domains that you seem to be pushing to FPGAs or other coprocessors, but I think ForwardCom will suffer from their exclusion. Encryption may be specialized, but it's not optional anymore. Fast encryption is going to be a central requirement of any computing device for decades. So the lack of AES and similar instructions is a disadvantage. You could say that FPGAs can do it, but computers today are expected to be able to do this without extra silicon. FPGAs are not in anyone's Bill of Materials for a smartphone, PC, or server (well, in rare cases for servers), and there's still a lot of doubt over whether there are enough programmers who know how to fully leverage FPGAs. Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing). You don't have any counterpart to the SSE 4.2 string comparisons, and I wouldn't want to punt that to FPGAs either. I know that programs rarely use those SSE 4.2 instructions today, but that sort of thing will become more common in the future as there is more pressure on energy use in cloud computing and battery life on mobile. At some point someone is going to build a clean-sheet web server that fits entirely in the L2/L3 cache. Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc. would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference? |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-04 14:37 |
Joe Duarte wrote:
Fast encryption is going to be a central requirement of any computing device for decades. So the lack of AES and similar instructions is a disadvantage.

You are probably right that AES instructions would be useful.

Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing). You don't have any counterpart to the SSE 4.2 string comparisons, and I wouldn't want to punt that to FPGAs either.

Text strings are rarely so long that it really matters. It will take less time to parse a string than to read it from an SSD anyway. The x86 AES and SSE4.2 instructions are still only 128 bits, while most other vector instructions are 256 - and soon 512 - bits. This is because they are costly to implement. In most cases, it will be faster to parse a string using simple compare instructions with the maximum vector length than with an SSE4.2 instruction limited to 128 bits.

Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc., would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?

Instruction fetch and decoding is a bottleneck on all Intel processors. They are putting a micro-operation cache after the decoder to fix this, but each micro-op uses many bits, so physical distance on the silicon chip is a limitation. The ForwardCom instruction format will be more efficient than this because it can decode fast and it is compact. x86 has a lot of suboptimal designs because it needs to preserve backwards compatibility with a long legacy of short-sighted designs and patches. They even keep supporting undocumented opcodes from the old 8086. It is difficult to predict how much you can gain from having longer vectors. Some applications will be unable to use long vectors. 
The special vector loop will be an advantage whenever the loop count is not certain to be a multiple of the vector size. |
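The point of the special vector loop can be modeled in Python: each pass processes min(remaining, maximum vector length) elements, so a loop count that is not a multiple of the vector size needs no separate scalar tail loop (max_vlen and the function name are illustrative, not ForwardCom syntax):

```python
def sum_vector_loop(data, max_vlen=8):
    # ForwardCom-style loop: the vector length for each pass is
    # min(remaining, max_vlen), so the final partial vector is handled
    # by the same loop body instead of a scalar remainder loop.
    total, pos, remaining = 0.0, 0, len(data)
    while remaining > 0:
        vlen = min(remaining, max_vlen)      # hardware sets this per pass
        total += sum(data[pos:pos + vlen])   # stand-in for one vector add
        pos += vlen
        remaining -= vlen
    return total
```

The same binary works unchanged on a CPU with a longer maximum vector: only the per-pass vlen differs, which is the forward-compatibility argument for this loop form.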
Proposal for instruction set - now on Github |
---|
Author: Hubert Lamontagne | Date: 2016-08-05 12:25 |
Joe Duarte wrote:
Agner, there are a couple of specialized instruction domains that you seem to be pushing to FPGAs or other coprocessors, but I think ForwardCom will suffer from their exclusion.

What about a specialized AES encryption device, accessed using memory-mapped I/O like other peripherals such as DMA controllers, timers, etc.? Wouldn't that do the job and keep the overall complexity lower? The OS would probably have to make sure no two programs are using the AES device at the same time, but otherwise it would be pretty straightforward: set the key, push the data through. Afaik that's close to how ARM does it.

Joe Duarte wrote:
Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing).

Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature. Afaik, there are only 2 approaches for this at the moment:
- If your load is to run a whole bunch of programs at the same time (= it's a server), you can use the UltraSparc design: a really simple in-order CPU that uses tons of hyper-threading, and every time something stalls, just switch to the next thread. The simple in-order core takes a lot less space, so you can cram a whole bunch of them on the same chip, which lets you compensate for speed deficiencies.
- Otherwise, just use a general-purpose out-of-order CPU that's as fast as possible. x86 fits this profile perfectly (which is why it's still so popular), and ARM is evolving towards that too. This is part of why RISC is popular: starting with a classic 4-issue out-of-order CPU like the DEC Alpha 21264, it's very hard to find new instructions to add that will actually speed it up at text processing and not be overly specific, and afaik the things that speed it up the most are just making branch prediction and the data cache as fast as possible (look at x86 evolution for a perfect example of this). 
Truth is, if your text processing program has the kind of stuff that doesn't run fast on x86, chances are it's doing stuff like defeating the branch predictor or data caching, and it's probably not going to run fast on ANY hardware. In fact, there are even good chances that it's not even filling the 4-instructions-per-cycle limit that modern Pentium-2-descendant cores can do. Bytecode interpreters fall into a similar category of load-store-jump code and are hard to speed up for similar reasons. A way to put it is that while you can use parallelism to increase the number of calculations per second, you can't use it to increase the number of decisions per second, which tends to be limited by electrical constraints (clock rate, number of cycles of latency on the data cache, number of cycles of latency on issue - which can be reduced with a simpler instruction encoding, but not past a certain limit).

I guess there's the special exception of the CPU designed to use dynamic translation - like the Transmeta Crusoe or Nvidia's Denver (which in theory the Nintendo NX could use). The catch is that even then, these cores tend to perform well on loopy-mathy code, but not very well on loady-story-jumpy code (which is why the Denver performs better than ARM cores on benchmarks but worse in real-world applications), and bytecode tends to be used for loady-story-jumpy code. It also doesn't help that high-level programming languages tend to have all sorts of features that make the language inherently slow (one example that comes to mind is Python's data structures, which I think use pointers to pointers just for getting to an object's data) and hard to dynamically compile (because they allow dynamic typing hacks, which makes it hard to protect the interpreter against corner cases).
Joe Duarte wrote:
Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc., would ForwardCom be faster or more efficient than a comparable Broadwell-E?

That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core. My hunch is that it would run at about the same speed as x86, but simply require less design time to implement all the crazy instruction unpacking and 286-era segmenting intricacies. I don't think ForwardCom would scale very well past 4 instructions per cycle, but to be fair, I don't think any instruction set does (they tried to do it with Itanium and failed). Imho, most of the ideas and innovation and potential speed gain in ForwardCom is in the vector instructions. This means that it has good potential for numeric-heavy applications (video games! physics!), if you use vectors, which generally means you have to program in C++ and use intrinsics (or use assembly). A few C++ compilers can auto-vectorize (Intel's compiler, MSVC 2012 and later, and GCC with all speed options turned on), but only for "trivial loops" - if any memory access aliases any other memory access in the loop, or there's any sort of carried state, then it's impossible to vectorize and the compiler will issue normal instructions (i.e. scalar integers and floats). Basically, the compiler has to prove that each loop iteration is 100% independent from the other iterations, which generally only happens in numeric-heavy code looping over a buffer, applying exactly the same mathematical operation on each iteration, with data coming in neat linear arrays. Higher-level languages (anything dynamically typed, like JavaScript or Python) are not designed for vectorization in any way, so they would probably run 100% scalar. 
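The "trivial loop" condition above can be made concrete with two Python stand-ins: the first loop's iterations are fully independent (the shape a C/C++ compiler can auto-vectorize), while the second carries state from one iteration to the next and cannot be vectorized the same way:

```python
def scale(dst, src, a):
    # Trivial loop: every iteration is independent of the others,
    # so a compiler can map it directly onto vector instructions.
    for i in range(len(src)):
        dst[i] = a * src[i]

def carried(x, a):
    # Loop-carried dependency: iteration i reads the value written by
    # iteration i-1, so the iterations cannot run in parallel as-is.
    for i in range(1, len(x)):
        x[i] = x[i] + a * x[i - 1]
```

In C, the first loop additionally needs the compiler to prove (or be told, e.g. via restrict) that dst and src don't alias before it will vectorize.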
For middle-level languages (Java, C#), those are in theory vectorizable, but only in the same cases where C++ is vectorizable (trivial loops doing numerical computation), and generally people use C++ (video games) or Fortran (scientific computing) for those applications, so there's not really any point AFAIK. |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-06 02:44 |
Hubert Lamontagne wrote:
What about a specialized AES encryption device, accessed using memory-mapped I/O like other peripherals such as DMA controllers, timers, etc.? Wouldn't that do the job and keep the overall complexity lower? The OS would probably have to make sure no two programs are using the AES device at the same time, but otherwise it would be pretty straightforward: set the key, push the data through. Afaik that's close to how ARM does it.

That's a pretty expensive solution. I wouldn't recommend a clone of the x86 AES instructions either, because they have long latencies, which is bad for the smooth pipeline structure of ForwardCom. I would prefer to split the AES operations into small unit operations with single-clock latency. If the length of vector registers is at least 2048 bits and we have full byte permute instructions, then a complete 256-byte lookup table can be contained in one vector register and the table lookup can be done with a byte permute instruction. The whole AES algorithm could then be implemented with general permute and XOR instructions, without any special-purpose instructions. This would be a flexible solution that could be adapted to other encryption algorithms as well. The price is a large permutation circuit.

Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature.

The branch misprediction penalty is equal to the length of the pipeline. The best way to speed up unpredictable branches is to have a short pipeline, or to speculatively execute both sides of a 2-way branch simultaneously. Some text processing jobs can be vectorized, using masked operations etc. UTF-8 conversion can be done with the compress_sparse and expand_sparse instructions in ForwardCom (sorry, they are poorly described in the manual). 
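The table-lookup-as-permute idea can be modeled in a few lines of Python: a full byte permute against a 256-byte table register is exactly a parallel substitution, which is all that a step like AES's SubBytes needs (the toy table below is NOT the real AES S-box; names are illustrative):

```python
def byte_permute(table, data):
    # Full byte permute: each input byte selects one byte of the
    # 256-byte table register -- i.e. a parallel table lookup.
    return bytes(table[b] for b in data)

# Toy substitution table (not the AES S-box): XOR every byte with 0x5A.
toy_sbox = bytes(b ^ 0x5A for b in range(256))
```

With a 2048-bit register the whole table fits in one register, so the lookup over a block of bytes is a single permute instruction rather than a memory access per byte.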
Hubert Lamontagne wrote:
Bytecode interpreters fall into a similar category of load-store-jump code and are hard to speed up for similar reasons. A way to put it is that while you can use parallelism to increase the number of calculations per second, you can't use it to increase the number of decisions per second, which tends to be limited by electrical constraints.

ForwardCom has a multiway branch instruction which is useful for bytecode interpreters, but I think the way to speed up bytecode is JIT compilation.

Joe Duarte wrote:
Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc., would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?

Hubert Lamontagne wrote:
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core.

The target of ForwardCom is indeed to beat the performance/price ratio of existing systems. It is designed with a simple and consistent pipeline structure, doing one thing, taking one clock cycle, at each pipeline stage, except for multiplication, division, and a few other necessary things. We should not forget, however, that in many applications the bottleneck is not the CPU but the cache, memory access, disk access, or network speed. |
Proposal for instruction set - now on Github |
---|
Author: fanoI | Date: 2016-08-08 06:00 |
Hello Agner, what about a hardware-accelerated decimal floating point unit, as present in IBM POWER CPUs? In particular for business applications the decimal type is necessary, but being done in software it is now 100x slower than float/double! |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-08 10:26 |
fanoI wrote:
what about an hardware accelerated Decimal Floating Point Unit

The x86 instruction set has instructions for integer decimal arithmetic. However, these instructions are removed in 64-bit mode because they are never used. The normal procedure is to convert decimal to binary, do the calculations in binary, and convert the result to decimal. These conversions can be vectorized to speed up the process. You can find code examples for this in my vector class library. Anyway, decimal numbers are for human use, and the bottleneck will always be the human, not the computer :-) |
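The convert-compute-convert procedure described above amounts to scaled integer arithmetic; a minimal Python sketch for two-decimal financial values (the helper names are mine, and negative amounts are not handled):

```python
def to_cents(s):
    # Convert a decimal string like "19.99" to binary integer cents.
    dollars, _, frac = s.partition(".")
    return int(dollars) * 100 + int((frac + "00")[:2])

def fmt(cents):
    # Convert integer cents back to a decimal string for display.
    return f"{cents // 100}.{cents % 100:02d}"
```

All intermediate arithmetic is exact binary integer math, so no decimal rounding errors accumulate; only the conversions at the edges deal with base 10.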
Proposal for instruction set - now on Github |
---|
Author: fanoI | Date: 2016-08-09 13:51 |
Agner wrote:
fanoI wrote:
what about an hardware accelerated Decimal Floating Point Unit
The x86 instruction set has instructions for integer decimal arithmetics. However, these instructions are removed in the 64-bit mode because they are never used.

I think the instructions you are referring to were for the old BCD mode. I was referring to actual Decimal Floating Point: https://en.wikipedia.org/wiki/Decimal_floating_point They exist in C as GCC extensions (_Decimal32, _Decimal64 and _Decimal128), in C++ as decimal32, decimal64 and decimal128, and in C# there is the Decimal type. To do calculations with these types on an x86 CPU you have to resort to doing them in software (in POWER CPUs IBM has added a Decimal Floating Point Unit to do this in hardware). They are used for financial calculations, where powers of 10 are more important than powers of 2 :-) |
Proposal for instruction set - now on Github |
---|
Author: Joe Duarte | Date: 2016-08-08 18:41 |
Hubert Lamontagne wrote:
Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature. Afaik, there are only 2 approaches for this at the moment:

I'm thinking of approaches like Parabix: http://parabix.costar.sfu.ca/ One of their papers: https://www.cs.sfu.ca/~ashriram/publications/2012_HPCA_Parabix.pdf I guess it just comes down to vector instructions and their parallel bitstreams approach. (Another way to boost it would be multiplexed streams, but that's a different plane of the architecture than the ISA.)

Hubert wrote:
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core. My hunch is that it would run at about the same speed as x86, but simply require less design time to implement all the crazy instruction unpacking and 286-era segmenting intricacies. I don't think ForwardCom would scale very well past 4 instructions per cycle, but to be fair I don't think any instruction set does (they tried to do it with Itanium and failed).

Well, Broadwell-E is a high target right now, but the target is moving. It's going to be a low target in 2020. Of course Agner's not necessarily thinking in terms of marketable silicon, but more toward a useful reference and engineering exercise. I'm just wondering what kind of performance wins are possible. Another way of framing it: if a well-funded team rolled up their sleeves and built a new general-purpose CPU architecture from scratch, including a new ISA and tape-out and all that, using TSMC's or Samsung's 16/14 nm process, could they beat Intel/Skylake? 
Maybe Intel is wringing all the performance out of current processes that physics allows, and the legacy technical debt is only a design-cost overhead. They seem to have been treading water since Haswell, so I sometimes wonder if a better architecture would make a difference or if the wall is more fundamental than Intel.
By the way, do you think Itanium was just too early, with too little tooling? Does an out-of-order CISC have any inherent advantage over VLIW or EPIC? It seems goofy to send one instruction at a time. I haven't found any deep-dive post-mortems on Itanium, just vague claims that no one could build a good compiler for it, or that the physical product just wasn't that fast. I think we're going to see some big advances in compiler technology in the near future, with powerful platforms like IBM Watson doing things a laptop-bound compiler can't do.
By the way, regarding the idea of a specialized AES encryption device, Intel offered QuickAssist on the v2 and v3 Xeons: http://www.intel.com/content/www/us/en/embedded/technology/quickassist/overview.html They don't seem to offer it on the Broadwells. It's a dedicated accelerator for encryption and compression, but I've never found a detailed review anywhere. Do either of you know more about it? I'm surprised it didn't get more attention. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Hubert Lamontagne | Date: 2016-08-09 17:49 |
Joe Duarte wrote:
I'm thinking of approaches like Parabix: http://parabix.costar.sfu.ca/
Hmm, that's an interesting technique. Very reminiscent of how they used to do graphics on the Amiga and in EGA (which worked well for 2D but had to be abandoned once they started doing 3D).
Joe Duarte wrote:
Hubert wrote:
That's a good question. I think that's exactly what ARM is trying to do! In fact, there are a whole bunch of ARM licensees taking a stab at this, and imho it's proving to be a little bit harder than expected... The catch is that once you have a CPU as big as a Broadwell, it has tons of stuff in it: in particular, a very complex memory architecture with a whole laundry list of features: aggressively pipelined multibank data cache with 2 read ports and 1 write port, TLB, out-of-order non-blocking everything with a memory ordering buffer, prefetching, multi-core cache coherency protocol, handling of misaligned addresses... This is basically almost totally independent of the architecture - you could implement all the same stuff on an ARM or MIPS or POWER core and it would be just as complex. If you factor in the other stuff like aggressively pipelined secondary units such as the FPU and SIMD unit, heavy branch prediction, hyper-threading... this is all stuff that's basically the same on other architectures. The only thing that x86 changes is the front end. The front-end part of a CPU isn't THAT large and complex compared to all the rest on modern cores with so many features. And if, say, some new silicon process changes the distribution of bottlenecks so that faster instruction decoding becomes more important again, then Intel can simply put in a larger or broader micro-code cache, or maybe even go back to the instruction trace cache like on the Pentium 4. 
(ok, the P4 is a bit reviled, but the truth is that it still held on pretty well compared to the other well-designed architectures of the time)
Joe Duarte wrote:
By the way, do you think Itanium was just too early, with too little tooling? Does an out-of-order CISC have any inherent advantage over VLIW or EPIC? It seems goofy to send one instruction at a time. I haven't found any deep-dive post-mortems on Itanium, just vague claims that no one could build a good compiler for it, or that the physical product just wasn't that fast. I think we're going to see some big advances in compiler technology in the near future, with powerful platforms like IBM Watson doing things a laptop-bound compiler can't do.
I think it boils down to a couple of things... First, the first-generation Itanium came out late. Because of this, the RISCs and x86s of the day had an extra generation to catch up, which means the first-generation Itanium totally failed to impress. The second generation came out pretty fast and sorta fixed this, but Itanium could never quite reach the speed of the fastest RISCs and x86s. One guy said that it was never as fast as HP's previous architecture (PA-RISC, totally a classic RISC).
Second, Itanium was basically betting that out-of-order would fail to deliver enough speed gains to keep the RISCs and x86s relevant. It turns out that out-of-order delivers speed gains at just the right place: memory accesses. Real-life programs have lots of memory accesses going on, which gives out-of-order a fairly hard-to-beat edge. It's just easier to deal with memory latency with an out-of-order pipeline than with the Itanium's insane ALAT thing with checked speculative loads and stores and whatnot. 
Late Itaniums can issue 12 instructions per cycle, which x86 will probably never be able to do, but even though x86 only issues 4 instructions per cycle, 3 of those can have a built-in memory access which can be crazy-reordered by the aggressive out-of-order pipeline, and it turns out that that's enough to smoke the Itanium. There's a question on Quora about exactly this and it has tons of interesting answers: Why did the Intel Itanium microprocessors fail? |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-08-11 19:09 |
Hubert Lamontagne wrote:
The catch is that once you have a cpu as big as a Broadwell, it has tons of stuff in it: in particular, a very complex memory architecture with a whole laundry list of features: aggressively pipelined multibank data cache with 2 read ports and 1 write port, TLB, out-of-order non-blocking everything with a memory ordering buffer, prefetching, multi-core cache coherency protocol, handling of misaligned addresses... This is basically almost totally independent of the architecture - you could implement all the same stuff on an ARM or MIPS or POWER core and it would be just as complex.
By the way, this point and your other point on dynamic translation, Transmeta, etc. remind me of the recent buzz around Soft Machines. I don't think I've seen any discussions of Soft Machines here. In particular, I was reminded of Ian Cutress saying in this article that Soft Machines was more than ten years behind Intel on speculation and branch prediction: http://www.anandtech.com/show/10025/examining-soft-machines-architecture-visc-ipc That really struck me as illustrating how much accrued work and research has gone into a modern Intel CPU, and that an ISA might be a relatively minor piece.
While I have you, what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-12 00:11 |
Joe Duarte wrote:
what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up.
My idea is not to use microcode in ForwardCom at all (and no microcode cache, of course). Intel and AMD use microcode only for complicated instructions that use many µops, as microcode is quite inefficient. There are thousands of different instruction codes in x86, and most of them are implemented directly in silicon. That must be a lot of silicon! I am sure that this is slowing things down. Some of these instructions were added mainly for marketing reasons, providing no real advantage. Many instructions are obsolete, but they keep supporting them for backward compatibility. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Hubert Lamontagne | Date: 2016-08-12 18:13 |
Joe Duarte wrote:
what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up.
Afaik, generally, there are '4 levels of implementation' for chips:
- FPGA: You write your logic in Verilog or VHDL, compile it to the FPGA, then run it.
- FPGA to ASIC conversion: This is basically an ASIC with a bunch of FPGA-like cells, but where the metal interconnection layer is customized. This has much lower fixed costs than an ASIC, but there's only a speed gain of about 30% and a cost gain of about 50% over FPGAs.
- ASIC: The Verilog / VHDL code is compiled to a proper circuit layout. Most chips use this. This makes much faster chips and has a much lower unit cost, but the fixed costs are very high ($1 million+, so you need lots of units), and there needs to be lots of verification.
- ASIC with additional custom-layout parts: Same as above, but some parts are laid out by hand rather than automatically. AFAIK Intel does this for the most speed-critical parts of its CPUs (like the L1 data cache) and it is part of its advantage over AMD (I've read somewhere that AMD has stopped doing custom layout).
More info: http://electronics.stackexchange.com/questions/7042/how-much-does-it-cost-to-have-a-custom-asic-made
Within the chip, instructions that have the same memory access pattern and latency are often interpreted more or less the same, with the ALU doing the different possible arithmetic operations in parallel and then having a multiplexer at the end that selects which result to use. In Verilog, it looks something like this:

case (alu_op)
  0: alu_out <= b;
  1: alu_out <= a + b;
  2: alu_out <= a - b;
  3: alu_out <= a & b;
  4: alu_out <= a | b;
  5: alu_out <= a ^ b;
  6: alu_out <= a >> b;
  7: alu_out <= a << b;
  8: alu_out <= a >>> b;
  9: alu_out <= a >= b ? 1 : 0;
  10: alu_out <= ($signed(a) >= $signed(b)) ? 1 : 0;
  default: alu_out <= alu_out;
endcase

Agner wrote:
My idea is not to use microcode in ForwardCom at all (and no microcode cache, of course).
Yeah, no microcode is a nice goal to strive for. Though that requires a certain amount of "instruction set discipline": no long-latency, multi-step or multi-µop operations, no operations that load/store an excessive number of registers or do multiple memory accesses, no special-case exceptions, and no instructions that can have all sorts of strange side effects deep in the pipeline and have to be specially executed in-order (like CPU mode manipulation using the flag registers, for instance)... |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: grant galitz | Date: 2016-08-22 22:53 |
Pardon my slight skimming of the contents of this thread. I was wondering if the ability to make the number of visible registers user-programmable, like THUMB on ARM, was considered prior to ForwardCom's writing. I could see special branch instructions that upgrade or downgrade the number of registers being a useful tool to fine-tune instruction caching/fetching vs. work done per instruction. I'm also curious as to the implications this would have on context-switching overhead, and any hardware logic issues. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-08-22 23:59 |
grant galitz wrote:
I was wondering if adding the ability for the amount of registers visible to be user programmable, like THUMB on ARM, was overviewed prior to forwardcom's writing.
The instruction code template has 5 bits for each register operand. This gives 2^5 = 32 registers. Adding more registers would make the instructions bigger. Fewer registers would be wasteful. If the number of registers is variable, then all the software, including function libraries and system code, has to be recompiled every time the number of registers is changed to optimally use the number of available registers. I think it would be too complicated to reconfigure register space as cache or something else. This would require a lot of extra wiring between otherwise unrelated parts of the chip. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-08-24 00:33 |
grant galitz wrote:
Pardon my slight skimming of the contents of this thread.
One reason ARM has a 16-bit instruction mode (THUMB) is that not all ARM cores have an instruction cache. If you're loading instructions from RAM or slow ROM (sometimes with a 16-bit bus), you really really want instructions to be as small as possible because it's so costly to load them up. THUMB mode was often used on the GBA when running code from the slower RAM or ROM, which had a 16-bit bus. This is why the SuperH has 16-bit instructions as well. Having an explicit switch between ARM and THUMB mode simplifies the pipeline because it removes the need for a prefetch buffer to handle unaligned instructions. Later on, they added THUMB-2 mode, which removed this limitation and let programs mix 16-bit and 32-bit instructions.
The catch with small instructions is that 16-bit THUMB code runs significantly slower than ARM on modern cores that have instruction caches and large, 64-bit buses, since so many instructions need to be broken up. And THUMB-2 code doesn't run faster than ARM in most applications, in spite of taking less instruction cache. One guy comparing the Linux kernel size and the Coremark benchmark has:
vmlinux-ARM32: text 8.4mb, data 463kb, bss 827kb, total 9.7mb
The best is -O3 -funroll-loops -marm -march=armv5te -mtune=cortex-a8 (Source)
So being able to lower the number of registers below 32 to shrink opcodes on a CPU design that's targeting large machines with lots of RAM doesn't seem too relevant to me... Now, increasing the number of registers is another question, and I'm curious about what people have to say about that. |
Reply To This Message |
ARM with scalable vector extensions |
---|
Author: Agner | Date: 2016-08-23 00:24 |
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length: ARM Announces ARM v8-A with Scalable Vector Extensions.
|
Reply To This Message |
ARM with scalable vector extensions |
---|
Author: | Date: 2016-08-23 06:11 |
Agner wrote:But the hardware seems much more complicated than ForwardCom. The ARM extension requires that the hardware scheduler is able to split long vectors into shorter ones or merge short vectors into longer ones. The compiler would also be quite complicated, I think. The special ForwardCom loop type is much more elegant and efficient with no limit to vector length, see forwardcom.info.Whereas splitting wider vector/SIMD instructions into narrower execution units is quite straightforward (just like early SSE/SSE2 in PIII/P4), merging narrower vectors for wider units really seems way more complex. |
Reply To This Message |
ARM with scalable vector extensions |
---|
Author: | Date: 2016-08-26 11:21 |
Agner wrote:
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length:
Well, the ARM version is also somewhat more aggressive since it tries to deal with scatter-gather, carried loop dependencies (aka feedback) and data-dependent loop exits.
Agner wrote:
The ARM extension requires that the hardware scheduler is able to split long vectors into shorter ones or merge short vectors into longer ones.
Hmm, I'm re-reading the article, and now I'm really wondering what they actually mean about how it adapts to different vector sizes. Either:
- It's a software scheme (in which the software knows the vector size, and if you have, say, 512-bit vectors, then the top 1536 bits always read as 0) and the article is slightly wrong in the way it presents things.
- Or vectors can always be 2048 bits and it simply uses more micro-ops on narrower processors (this is already how NEON works) - this is relatively straightforward to implement, although it leaves the issue of having to deal with oversized register files (I'm not up to date on how costly this is in hardware).
- Or maybe it's some other crazier scheme? I can't quite tell.
Agner wrote:
The compiler would also be quite complicated, I think.
Auto-vectorization is always complicated! I'm not sure exactly why the ARM version is more complicated than ForwardCom, though. Yes, the ARM version requires a more complex calculation to get the vector length and turn it into lane flags, although there's a good chance this extra calculation can run in parallel with other stuff and essentially be free. The ARM version doesn't require transforming the loop index 0..n-1 into a negative index from the end of the buffer, which I'm sure is doable in SSA form but might get complex for loops with lots of references to the indexing variable. 
Agner wrote:
The special ForwardCom loop type is much more elegant and efficient with no limit to vector length, see forwardcom.info.
It's true that the ARM version doesn't have the insight that if you use reverse indexing from the end of the buffer, you can clamp the index and use it as a count for the number of items per iteration. If I'm reading the news post correctly, I think the ARM idea is that they use the lane predication flags as vector sizes, so they probably need a couple of specialized instructions to produce the correct clamped item counts and masks. But they're doing both predication and variable vector size using the same hardware, which I guess is a good thing. Maybe they're using general-purpose integer registers as lane masks - that would explain the 2048-bit size limit (64-bit register = 64 lane flags * 32-bit float per lane = 2048 bits). The load/store unit needs to know the lane masks, so they need to be known early in the pipeline, and I don't think it's using too many general-purpose register file ports. |
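As an illustration of the length-agnostic loop style both SVE and ForwardCom are after, here is a scalar C model (my sketch: `VL` is a stand-in constant for the hardware vector length, and the inner loop models the active lanes that predication or the clamped count would select):

```c
#include <stddef.h>

#define VL 4   /* stand-in for the hardware's vector length, in elements */

/* Length-agnostic loop: each pass handles min(VL, remaining) elements,
   so the same binary works unchanged on hardware with any VL.
   Real code would use one predicated vector instruction per pass. */
void add_arrays(float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i += VL) {
        size_t lanes = (n - i < VL) ? n - i : VL;  /* active lanes this pass */
        for (size_t j = 0; j < lanes; j++)
            a[i + j] += b[i + j];
    }
}

/* Small self-check with n = 6, which exercises a partial final pass. */
float add_arrays_demo(void) {
    float a[6] = {1, 2, 3, 4, 5, 6};
    float b[6] = {10, 10, 10, 10, 10, 10};
    add_arrays(a, b, 6);
    return a[0] + a[5];   /* 11 + 16 */
}
```

ForwardCom's twist is to derive `lanes` from a clamped negative index off the end of the buffer, while SVE derives it from predicate-generating instructions; the loop shape is the same.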
Reply To This Message |
ARM with scalable vector extensions |
---|
Author: | Date: 2016-12-20 07:22 |
Agner wrote:
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length:
Most of the HotChips yearly presentations are made publicly available late each year. Now you can find further details on SVE at: www.hotchips.org/wp-content/uploads/hc_archives/hc28/HC28.22-Monday-Epub/HC28.22.10-GPU-HPC-Epub/HC28.22.131-ARMv8-vector-Stephens-Yoshida-ARM-v8-23_51-v11.pdf |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-05 01:53 |
One recurrent question: How will ForwardCom run Linux's mmap function? |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-09-05 05:12 |
You can map a virtual address space as unused, read only, write only, or read/write. In this way you can generate traps on first read and first write to a block of memory. If the mapped file is bigger than available RAM and you are writing in a random fashion then it will be more likely to flush a dirty part of the area to the file than to fragment the memory. The number of memory mapped files you can have is obviously limited by the size of the memory map. I wouldn't recommend the use of memory mapped files with ForwardCom. Now that solid state disks are becoming popular (and bigger) there is hardly any point in using memory mapping for the purpose of minimizing disk head movements. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-05 18:33 |
There's more to mmap:
- It can be used to allocate memory, page by page (for instance as a back-end for malloc()). |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-09-06 08:51 |
Hubert Lamontagne wrote:There's more to mmapI was not aware of these details. Many of the applications of mmap you mention look like hacks in my opinion. A redesign of the memory management system should offer genuine solutions to these problems rather than hacks. Malloc should be implemented in a way that matches the new memory management system. Video ram should be accessed only through a device driver. This device driver may share video memory with an application only if the OS allows an application to monopolize the screen. Sharing of memory between processes is described in the ForwardCom manual. It should be done in a standardized way for security reasons. Demand paging and memory swapping to disk are separate problems that should have dedicated solutions without using mmap. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-06 16:19 |
I don't think you will be able to remove all usages of mmap(). Memory mapping is very convenient, and if an architecture doesn't allow for it, it won't be used much for general-purpose computing. Programmers are already familiar with it and have created a lot of utilities that use it. For example, you couldn't port most of the general-purpose OSes to such an architecture, since they are based on virtual memory (there is a special fork of Linux called uclinux [1] that doesn't need an MMU, though it's specialized for embedded solutions).
For example, 3D graphics APIs have memory-buffer map functions that are used to map part of GPU memory into the process address space. Since this part is already exclusive to the application (it has been allocated previously), there is no harm in bypassing the device driver (which performs the map), and it lets the user perform reads and writes with higher performance. A solution with a shadow buffer that is copied around when needed is possible, but it would reduce performance.
A mechanism used by the Mill CPU family uses protection data that maps which memory is accessible to which application [2]. It's stored in a PLB (protection look-aside buffer) that, unlike a TLB, is not on a critical path, since it only has to prevent failing writes from affecting other CPUs and make failing reads cause a fault - no address translation is performed. Maybe such a mechanism would be a good compromise here? The OS (or any other entity) would be able to let another one access its memory on demand. Mill also has a memory translation table, but the TLB sits outside of the core, near a memory controller. The mapping is global, not per-process (no TLB flush on context switch needed), and it does the translation only when an access goes to memory, at which point the latency is already high, so making the TLB fast is not really that important. All the data in the caches is virtually addressed, virtually tagged. But I repeat myself here. 
We could come up with a better solution instead, though it doesn't appear that it will be simple. [1] https://en.wikipedia.org/wiki/%CE%9CClinux |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-06 22:41 |
Ok I checked, there are 2 OS calls to allocate RAM in Linux: sbrk() and mmap(). They serve as a back-end to malloc(). The "heap" segment shown on some Linux memory map diagrams is the section allocated by sbrk(), and the rest of user-allocated ram is allocated using mmap(). The current default implementation of malloc() is ptmalloc2 and it uses both sbrk() and mmap(), but in different scenarios: |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-09-07 00:10 |
Hubert Lamontagne wrote:Ok I checked, there are 2 OS calls to allocate RAM in Linux: sbrk() and mmap().Thanks for the clarification. sbrk would be easy to implement in ForwardCom. And it could be thread safe because each thread has its own memory map and its own private memory in ForwardCom. Everything that causes a heavy fragmentation of memory for the same thread is a no go. In some applications, the limitation can be circumvented by making multiple threads - one for each block of shared memory. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-07 17:18 |
Agner wrote:And it could be thread safe because each thread has its own memory map and its own private memory in ForwardCom.Ah, but if your threads don't share memory, then they're not threads, they're separate processes, which is a different thing! |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2016-09-08 00:39 |
Hubert Lamontagne wrote:
if your threads don't share memory, then they're not threads, they're separate processes, which is a different thing!
ForwardCom has a special security feature for threads. A process can have a parent thread and several child threads. All child threads have access to the memory of the parent thread. The stack, heap and thread-local storage of a child thread is private by default, not accessible to its parent and siblings. This can be useful if the process is servicing several clients. Communication between threads - e.g. a semaphore - must use the parent's memory space. Thread memory is shared only if this is necessary for compatibility with legacy software. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-08 17:24 |
Agner wrote:
ForwardCom has a special security feature for threads. A process can have a parent thread and several child threads. All child threads have access to the memory of the parent thread. The stack, heap and thread-local storage of a child thread is private by default, not accessible to its parent and siblings. This can be useful if the process is servicing several clients. Communication between threads - e.g. a semafor - must use the parent's memory space.
If you want to port anything that uses threads on a POSIX OS (which all use the pthreads library) or on Windows, the expected behavior is that all threads share the same memory, yes. For instance, if multiple threads can modify a std::vector (protected by a mutex), then if the vector changes size and its memory is reallocated, all threads have to see the newly allocated memory regardless of which thread caused the size change. Since std::vector memory allocations ultimately use malloc(), this means that malloc() has to return memory which is visible to all threads by default. |
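Hubert's point can be shown in a few lines of C with pthreads (my sketch; the function names are made up). Heap memory allocated by one thread is visible to every other thread by default, which is exactly what ForwardCom's private-by-default child heaps would change for unported code:

```c
#include <pthread.h>
#include <stdlib.h>

/* The child writes through a pointer to memory malloc'd by the parent. */
static void *writer(void *arg) {
    int *shared = arg;
    shared[0] = 123;                 /* write to the parent's heap block */
    return NULL;
}

int shared_heap_demo(void) {
    int *shared = malloc(sizeof *shared);  /* allocated by the main thread */
    shared[0] = 0;
    pthread_t t;
    pthread_create(&t, NULL, writer, shared);
    pthread_join(&t, NULL);          /* join also orders the memory accesses */
    int v = shared[0];               /* the child's write is visible: 123 */
    free(shared);
    return v;
}
```

Under ForwardCom's scheme this would still work, but only because the block lives in the parent's memory space; a pointer into a sibling's private heap would fault instead.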
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Commenter | Date: 2016-09-07 19:28 |
I don't consider myself well informed on this topic, so please excuse me for any mistakes. > there is hardly any point in using memory mapping for the purpose of minimizing disk head movements I've actually never heard of mmap being useful for that purpose? mmap is useful for reading files as it avoids an extra memory copy. When a file is mapped, the userland application essentially can read directly from the kernel's disk cache buffer. On the other hand, with standard read() calls, the kernel has to make an additional copy into the userland supplied buffer. This works similarly for write() calls. In fact, I'd imagine that with faster NVRAM style devices, mmap may allow the file to be read directly off the I/O device without ever being read into system RAM. |
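The zero-copy read Commenter describes can be sketched with POSIX calls (my illustration; the temp-file setup is only scaffolding for the demo):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write a small file, then read it back through a mapping instead of
   read(): the process reads the kernel's page cache directly, with no
   copy into a userland buffer. */
int mmap_read_demo(void) {
    const char msg[] = "hello";
    char path[] = "/tmp/mmapdemoXXXXXX";
    int fd = mkstemp(path);          /* temp file just for the demo */
    if (fd < 0)
        return -1;
    write(fd, msg, sizeof msg);      /* 6 bytes, including the NUL */
    char *p = mmap(NULL, sizeof msg, PROT_READ, MAP_PRIVATE, fd, 0);
    int ok = (p != MAP_FAILED) && (strcmp(p, "hello") == 0);
    if (p != MAP_FAILED)
        munmap(p, sizeof msg);
    close(fd);
    unlink(path);
    return ok;
}
```

With a read() loop, the same bytes would be copied from the page cache into a caller-supplied buffer; the mapping skips that copy, which is the saving Commenter points at.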
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2016-09-08 13:04 |
Commenter wrote:In fact, I'd imagine that with faster NVRAM style devices, mmap may allow the file to be read directly off the I/O device without ever being read into system RAM.Actually, this is what Linux DAX infrastructure does. It allows you to map nvram directly to the process's address space. |
Reply To This Message |
Paging |
---|
Author: Kurt Baumgardner | Date: 2016-09-09 23:40 |
I have received this mail from Kurt Baumgardner and I think it belongs on the forum so that others can join the discussion:
Hi Agner, My name is Kurt Baumgardner, and I've been reading your processor design draft. Very impressive! Quite obviously a labor of love. You've done your research, which is also obvious from the rest of your site. I highly respect your work and findings. Please understand, I don't mean to be critical. But I had to mention my worry about the decision to drop paging from the memory model, which I think is going to be missed. In a handful of 'controversial' choices, you provided a logical rationale for going with one method vs. another, but either I missed it, or you didn't go into detail about why no paging. I think it is related to your application memory map model, which, if I read correctly, is a simpler (and possibly tighter) app-based, "just what's needed" model. Such a model runs counter to a typical paging scheme, where most mallocs receive pages vs. arbitrary-sized blocks. I will be the first to admit that I must read that section a few more times. And even then, it's a deep subject to follow. Regardless, I feel that, from what I understand so far, the memory model might be a bit too simple. You described an idea of the compiler calculating the needed stack size, and/or profiling to attempt to determine the needs. And this is made easier with the 2-stack approach (which I really like). I understand this pre-calculation approach for optimization, but is its purpose also to help reduce complexities that would make your memory model seem less attractive? (I'm not sure if I described that understandably). Let me take a step back. My biggest issue/fear/disappointment is with the removal of paging. It's always nice to fit everything in memory (64MB is enough for anyone, right?), but you can't always do it. 
With all the ideas of future compatibility, like with software automatically taking advantage of larger vectors, the loss of paging seems like a huge step backwards, as it seriously limits machines without large amounts of installed memory. I know of a small percentage of server apps that allocate memory with disregard for actual physical memory size, depending on and expecting the paging hardware and software to handle the otherwise messy details of managing extremely large datasets. In my opinion, 2 of the most important inventions of computing are: #1: using free memory as a disk cache, and #2: built-in automatic paging. There's really no other practical way to provide apps the freedom to allocate memory as needed. Apps have no business trying to home bake their own paging scheme. Without automatic paging, all the work you did for future compatibility is wasted. Why? Because developers will not write the big programs that could take advantage of future improvements, because they're afraid of running out of memory! I know the goal is to simplify the system, for elegance, for cost, for power efficiency, for performance, and many other good reasons. But, your future chip must at least be able to support the hugely-successful and useful capabilities of today's processors. The loss of paging cripples the system severely as a whole. Or, maybe I'm missing something huge. With the current spec, would you simply have to crash a launching (or running) program with an "Out-of-memory" error? Memory is handled so well currently that many programmers don't even test for the possibility of memory allocation to fail (which is awful, but also amazing that they can get away with it). Thanks for your time reading, and, if you have a minute or two, please let me know what you think, or your rationales, or your thoughts. If you wish, this text could be placed into your forum. Have a nice day! By the way, I *really* like the idea of programmer-built extensions to the instruction set. 
I would love to have the following capabilities: 1. Software gate connection (direct gate array access). (one can dream!) Kurt |
Reply To This Message |
Paging |
---|
Author: Agner | Date: 2016-09-10 00:18 |
Kurt Baumgardner wrote:
my worry about the decision to drop paging from the memory model,
Current systems have a very complicated memory paging system with huge two-level page tables which are partially cached in the on-chip TLB. This system is complicated and uses a lot of resources. My goal with ForwardCom is to see what can be gained by a complete redesign. The paging system seemed an obvious candidate for simplification, so I got the idea of replacing the huge number of fixed-size pages with a small number of variable-size pages. It was just an idea that I wanted to discuss, and as you can see it has already generated a lot of discussion. This memory map system will only work if we can avoid heavy memory fragmentation, but if it works it will be a great advantage. The memory map will be so small that it can be contained entirely on the chip. You can have a separate memory map for each thread, and you can easily switch memory maps on system calls and task switches. An application can still allocate memory dynamically, but preferably in a way that doesn't cause too much fragmentation. If an application allocates a huge amount of memory, not knowing how much it will actually need, then the OS may just allocate virtual memory in the first place. When physical memory is needed, it will be allocated in blocks of exponentially growing sizes (each new block will be bigger than the previous one) in order to minimize memory fragmentation. As a last resort, the system may move data around in memory if a contiguous virtual memory space has become mapped to a too-fragmented physical memory. Moving data in memory may be fast because the data path to cache (and to main memory?) can be as wide as the maximum vector length. The advantage of having variable-size memory map entries is that you will need much fewer entries. The disadvantage is that you need a lot of comparators to check all entries in one or a few clock cycles. 
The actual number of entries will depend on the hardware implementation and is subject to experimentation. A well-behaved application that causes little or no memory fragmentation will need very few memory map entries, so a programmer who makes a performance-critical application should exercise a little discipline to avoid excessive memory fragmentation. |
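To illustrate the lookup described above, here is a minimal software sketch of a memory map with a small number of variable-size entries. The struct and function names are invented for illustration; in hardware, all the range comparisons would be done by parallel comparators in one or a few cycles, not in a loop.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// One variable-size memory map entry: a contiguous virtual range
// mapped onto a contiguous physical range.
struct MapEntry {
    uint64_t virtBase;   // start of the virtual range
    uint64_t size;       // length in bytes (need not be a power of two)
    uint64_t physBase;   // corresponding physical base address
};

// Translate a virtual address by checking every entry. The sequential
// loop stands in for what would be parallel comparator logic on-chip.
std::optional<uint64_t> translate(const std::vector<MapEntry>& map,
                                  uint64_t vaddr) {
    for (const MapEntry& e : map) {
        if (vaddr >= e.virtBase && vaddr < e.virtBase + e.size)
            return e.physBase + (vaddr - e.virtBase);
    }
    return std::nullopt;  // no entry covers this address -> fault
}
```

Because each entry covers an arbitrarily large range, a handful of entries can describe a whole well-behaved process, which is what makes an entirely on-chip map plausible.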
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-11 13:40 |
I guess the system is somewhat close to software TLB schemes like on MIPS. The difference is, of course, that in software TLB schemes pages must have 2^N sizes and be aligned. The advantage is that you don't need a full comparator per TLB entry; a content-addressable memory (CAM) is sufficient, which means a larger TLB is possible - MIPS classically had a 64-entry TLB. Variable page size is not that well established as a technique: large pages don't work that well with copy-on-write, they don't deal well with memory fragmentation, and larger pages tend to be split up by operations like mmap / munmap / mprotect. Generally, memory contiguity is a limited resource, with contention not only at program scale but also at global OS scale, and the general response has been to keep using 4k pages and increase the size of TLB caches. When you add multiple page sizes, you introduce some hard NP-complete problems into the OS's page allocator, since it can't just keep a list of available free pages that it hands out to programs willy-nilly; it needs all sorts of page-contiguity-preserving algorithms. And I'm pretty sure it's often not practical to allocate memory by enlarging already-allocated blocks, because this only works if the adjacent memory is unallocated, which is not only a program-scale problem but an OS-scale problem. Moving data in memory is NOT a good idea IMHO, no matter how fast you can copy - especially for applications like games, which can use hundreds of MB of RAM and CANNOT tolerate 5 ms or 10 ms stalls. |
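The difference in per-entry comparator cost can be sketched in code. These are hypothetical helper functions; each body corresponds to what one TLB entry's match logic must compute in hardware.

```cpp
#include <cstdint>

// Power-of-two, aligned page (classic software-TLB scheme, e.g. MIPS):
// a match is one masked equality test, which maps directly onto a CAM
// cell - cheap enough to replicate 64+ times.
bool matchPow2(uint64_t vaddr, uint64_t vpn, uint64_t pageSize /* 2^N */) {
    return (vaddr & ~(pageSize - 1)) == vpn;
}

// Arbitrary-size entry (the ForwardCom proposal): a match needs a full
// range check, i.e. two magnitude comparators per entry, which is why
// the entry count must stay small.
bool matchRange(uint64_t vaddr, uint64_t base, uint64_t size) {
    return vaddr >= base && vaddr < base + size;
}
```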
Reply To This Message |
Paging |
---|
Author: Kurt Baumgardner | Date: 2016-09-13 00:12 |
I don't think it's possible to both be thrifty with memory and prevent memory fragmentation, from an application programmer's point of view. You can allocate a huge heap at the start of your program and do all your work in that heap, thereby preventing fragmentation. Or you can use only what's currently needed, releasing memory back to the OS, but creating the potential for fragmentation. As a programmer, I personally don't want to have to think about how the OS gives me my memory, and I want the OS to make use of memory I've released. And in today's OSes, the method for providing memory to programs is a magic black box - you ask, and the OS gives. You really don't have a lot of control over how that occurs. It would be nice to be able to provide hints: "I need a temporary chunk of memory that I'll give back soon", or "I'll be needing a contiguous chunk of size X for a while." I assume we're talking about a replacement for x86/x64, for servers and desktops. I can sympathize with the goal of optimizing and wanting to keep everything simple, and that's a noble cause. But before making it optimal, it has to work first (and by that I mean it has to be practical and able to handle today's apps). Consider a web browser. Navigate to a text page (or a forum), and that browser may need a couple of MB of memory. But then navigate to an image-laden page, and BAM! You now need dozens of MB to render all those images. Some browsers delay the rendering of JPGs, for example, until you actually scroll down to where the JPG is visible. And if you keep scrolling, they toss that rendering from memory. My point is that some programs have very complicated memory allocation needs, and that does not mean they are written poorly. Those web browser designers are also optimizing, but they are concerned with memory usage, load time, render speed, and much more. Forcing them to allocate memory a different way limits their abilities. 
And, without paging, they are forced to be thrifty with memory. Which causes fragmentation. Full circle. If this chip is to sit in people's homes across the world, you've got to expect that many thousands of different programs will be running on them. And each program will be built by people with varying levels of skill - people who will not respect the beauty behind the memory organization scheme. And you have to cater to them, because if their program brings the system to its knees, your chip and the OS will be blamed. In other words, some crappy app should be able to disrespect the entire memory space... and your chip and OS should handle it with some grace, I believe. And even trying to be respectful of the machine's resources seems difficult, as I am torn between releasing no-longer-needed memory (which is supposed to be the right thing to do) and trying to prevent fragmentation (which is, essentially, the opposite, unless I've missed something). Back to the subject of dreaming about my ideal CPU. These are not well thought out, or even practical, most likely. But they'd be nice. Here goes:

1. 16 GB of memory on the same chip as the CPU. I don't know what the limitations are, but if all your PC's memory could be super-fast and on-chip, wow! Memory wait states slow down the CPU a lot, and if it was all on the same chip, you could eliminate all the complex nightmare caching hardware.
2. If not #1, then some really good cache hint/directive instructions.
3. Instead of relying on branch prediction, why not take both branches, and provide the ability to swap pipelines to use the confirmed branch? This dual pipeline could be used for extra execution when branching was not occurring.
4. A hardware block move/fill/swap, page-based, that runs like a background job. An instruction could be used to test for completion by comparing a given address against each pending job's address range.
5. Instead of saving registers on task switch, use an array of registers, indexed by task ID. 
I'm sure I could think up some more. As I stated before, I realize that most of these are far-fetched and do not really fit into Mr. Fog's design. But maybe one has merit, so I present them for thought. I find the whole project fascinating, and I hope one day you can get this chip to the fabrication stage! Best of luck! |
Reply To This Message |
Paging |
---|
Author: Agner | Date: 2016-09-13 00:39 |
Kurt Baumgardner wrote: "And each program will be built by people with varying levels of skill. People who will not respect the beauty behind the memory organization scheme. And, you have to cater to them, because if their program brings the system to its knees, your chip and the OS will be blamed."

Some performance-critical programs are written by highly skilled programmers who want to tweak as much performance out of the system as possible. Depending on the application, they may be able to predict how much memory they will need and allocate it all at once. Such applications would need only a handful of memory blocks, which could easily be handled by an on-chip memory map. Other, probably less critical programs are made with point-and-click tools and abstract frameworks by people who have no idea what is going on behind the scenes. They may cause heavy memory fragmentation without even knowing it. Maybe we can have a dual system with an on-chip memory map for well-behaved applications and a partially software-based paging system which is activated only if the memory becomes too fragmented. Programmers who make well-behaved applications will be rewarded with superior performance.

Kurt Baumgardner wrote: "Back to the subject of dreaming about my ideal CPU: These are not well thought out, or even practical, most likely. But they'd be nice. Here goes:"

These ideas are not far-fetched, and some have already been implemented in various systems. Swapping register banks was even supported in some processors back in the 1980s. Putting RAM on the chip is an obvious thing to do for the less memory-hungry applications. The more RAM you put on the chip, the slower it will be, so you need one or more levels of cache in between anyway. |
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-13 19:08 |
Agner wrote: [...]

Believe me, I like the idea of the super-fast memory map, and the dual system idea is interesting. I appreciate a system that remembers to give the programmer the keys to the kingdom, so to speak, because most systems provide a dumbed-down, common-denominator approach. But a dual system? That makes the system that much more complicated, doesn't it? Maybe OS drivers and OS-protected low-level stuff could use the on-chip map exclusively. Otherwise, I'll be pushing to have my Super Zombie Crusher 3 pushing OS stuff out to let my 100-fps renderer live on-chip, right?!! What programmer would choose the slow model? I don't know - the more I think about it, the more I lean towards the traditional model. Yes, it's slow and complex, but that's because it's necessary to get the power you want and need. I think Intel got it right, somewhat... it's hard to deny. Simplifying it up front leads to more complex schemes down the road. To me it seems inevitable. Now, please reconsider on-chip mass memory, which could alleviate this issue and others - see below.

On the subject of massive on-chip memory: It's slow. OK. But how slow? I really want to see the numbers on this one. Maybe it requires a new technology. But let's assume it's possible. Think about it: low fixed wait states, zero cache misses, no complicated cache hardware - this is a big one. Or, that pipeline burst cache hardware instead force-feeds the execution pipeline with instructions. Cache misses happen a lot. And surely it's going to be faster than external memory. Is it expensive, money-wise? Is that why we don't see it being attempted? Maybe you could spread the memory across multiple cores. Some algorithms simply cannot avoid reading memory "vertically", such as 90-degree block rotation algorithms. 
These functions can suffer a cache invalidation on each read, unless proper cache hints are provided...yielding a needlessly slow read/write. To me, cache misses and branch mis-prediction are the 2 things that prevent us from being able to optimize our code accurately, cause we cannot determine how much time each instruction will take, therefore we cannot pair instructions perfectly. Knowing exact memory timing allows the pipeline to be finely-tuned, I'd imagine. Removing the cache hardware provides fast chip real estate to put RAM, and simplifies the design enough to justify research into making it possible. And, your memory map could sit in this same main memory too. |
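The "vertical" access pattern Kurt mentions can be made concrete with a small sketch (hypothetical function, rotating a W x H image of ints). Whichever loop order you pick, one side of the copy strides through memory H elements at a time, so consecutive accesses may each touch a different cache line.

```cpp
#include <cstddef>
#include <vector>

// Rotate a w x h image 90 degrees clockwise. The source is read row by
// row (cache-friendly), but the destination is written column by
// column: consecutive writes are h elements apart. Swapping the loops
// just moves the strided access to the reads instead.
std::vector<int> rotate90(const std::vector<int>& src, size_t w, size_t h) {
    std::vector<int> dst(w * h);  // dst is h wide and w tall
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x)
            dst[x * h + (h - 1 - y)] = src[y * w + x];
    return dst;
}
```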
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-13 12:12 |
Kurt Baumgardner wrote: [...]

http://www.zdnet.com/article/why-doesnt-intel-put-dram-on-the-cpu/
http://electronics.stackexchange.com/questions/175615/why-is-ram-not-put-on-the-cpu-chip

Generally, this is done with instructions like PREFETCH on x86.

A lot of branch instructions that your CPU pipeline will see are in loops and generally branch the same way many times in a row, and would actually get slower if you ran both sides, because rarely executed paths would be competing for resources. Here's a random C++ example: for(int i=0; i<200; i++) { ... } The branch at the end of the loop (i<200) will be taken 199 times in a row, and only skipped once. Speculatively executing the code after the loop makes no sense in this case. In addition, the branch in the middle of the loop (if(mVolumeFade)) will either always be taken or never be taken in the 200 iterations, so speculatively executing both sides of that branch doesn't make sense either, and it's better to trust your branch predictor. Speculatively executing both sides is generally something you'll only see in very large and complex out-of-order CPU cores, and I think it probably involves something like a 3- or 4-way branch predictor that predicts NO-JUMP/MAYBE-JUMP/YES-JUMP or NO-JUMP/MAYBE-NO-JUMP/MAYBE-YES-JUMP/YES-JUMP. I don't think it handles multi-way branches too well either (i.e. a jump to a function pointer... some modern cores can handle multiple jump targets like this).

Sounds a lot like a DMA controller - and if I'm not mistaken, the x86 DMA controller can be used for that, but I don't think it runs fast enough to get a speed gain. And it competes with the CPU for memory access cycles, which isn't too good, although I guess you could make it operate in the unused cycles.

One variant that's worth it, and that is used on ARM, is to have two stack pointer registers, one for USER mode and one for OS mode, and switch when interrupts happen, because this makes interrupt handlers faster. 
Fast interrupt handlers are generally nice to have because then it makes stuff like software TLBs realistic, and it speeds up OS calls. Fast task switch between user programs is a bit less important because you're going to have tons of cache misses anyways, so not having to save/restore registers isn't as important (and a lot of programs can run at the same time, which makes it unlikely that you can hold all register sets at the same time). |
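Hubert's loop example above arrives truncated; a plausible reconstruction looks like the following. The function name, the sample buffer, and the fade logic are invented for illustration; only the two branches and the iteration count come from the post.

```cpp
#include <array>

// The backward branch (i < 200) is taken 199 times in a row and
// skipped once; the inner branch (if (mVolumeFade)) goes the same way
// on every one of the 200 iterations. Both are easy work for a branch
// predictor, and running both sides speculatively would only waste
// execution resources.
void applyFade(std::array<float, 200>& samples, bool mVolumeFade) {
    float gain = 1.0f;
    for (int i = 0; i < 200; i++) {
        if (mVolumeFade) {       // loop-invariant condition
            samples[i] *= gain;
            gain *= 0.99f;       // fade out gradually
        }
    }
}
```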
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-14 16:56 |
Thanks, guys - great, informative, interesting answers. I wish I knew more about the architecture. I would like to come up with a way to mitigate the cost of a pipeline flush. It seems like such a waste when it happens: as good as modern branch prediction is, a lot of the benefit of smart prediction is thrown away. Maybe, while taking both branches, another predictor would realize that a wrong branch was being repeatedly taken, and would instead store an image of the pipeline that could be switched to when the mispredicted branch was actually taken, while the other context was flushed in the background. I think I would pull my hair out trying to design such a thing. @Agner - I think you're on the right track avoiding microcode, keeping the design simple, and seeing just how well it can be made to perform. Best of luck. |
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-11 17:54 |
Kurt Baumgardner wrote: "By the way, I *really* like the idea of programmer-built extensions to the instruction set. I would love to have the following capabilities:"

That sounds a lot like a CPU+FPGA combo... That would be a nice thing to have. I'd say the challenge is that it's hard to share an FPGA between multiple programs. The usual strategy is to save/restore the state on task switch, but that's hard with an FPGA, since there's a lot of setup data, and it's hard to save/restore all the registers and block RAMs. You could try to share parts of the FPGA, but that sounds very hard to do... though I guess you could split it into N units with a bunch of gates each, and each application would take up some number of units. Another model would be how video cards are shared.

Kurt Baumgardner wrote: "2a. Microcode programming (if so equipped), or"

That's a bit hard if most of the operations on your CPU already map to a single micro-operation, like on a RISC. For a machine with combined memory+arithmetic instructions like x86 or ForwardCom, I guess you could make the combinations of memory-op + arithmetic-op user-configurable, or offer configurable multi-step ops like multiply-accumulate. You could also separate the address calculation and the load/store operation, as long as you don't add new register file writes/reads on the intermediary steps - you could have hidden extra registers that are only visible to sub-operations of your instruction, which wouldn't have to be renamed/loaded/stored to the main register file.

Kurt Baumgardner wrote: "2b. the ability to control load/store, ALU, bit shift, cache control, vectoring, etc hardware."

VLIW CPUs do something a lot like that. You basically specify which operation each part of the CPU is doing. 
They haven't really caught on yet because they have trouble dealing with things like memory store instructions needing to have the target address known early (so that program order in loads/stores can be preserved without requiring the compiler to do ultra-aggressive alias analysis) while the data to be written can only be known late (so that the CPU has time to calculate it).

Kurt Baumgardner wrote: "3. An on-chip cache dedicated to user instructions. Could be used for extremely fast lookups/translations, pre-calculated math, etc."

Hmmm... that might not be a bad idea, though it would make the issue hardware of the CPU appreciably more complex (so it would make branch mispredictions more costly). It would be useful for implementing CISC instructions on a RISC base unit, for instance... basically it would make a good cache for implementing the 2a proposition. |
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-13 15:51 |
Hubert Lamontagne wrote: [...]

I did not consider multiple programs wanting to use the FPGA - that *is* an ugly issue. Maybe it could be treated like a device, and programs would have to make a reservation and obtain an exclusive lock on at least a portion of the array. I'm not sure there's a good answer there. But you're onto something: it could be similar to the extra processing offered by GPUs. The reason it works as well as it does for GPUs is that, typically, only one program is using the GPU at any given time (when rendering games, which is, of course, the most visible use). |
Reply To This Message |
Paging |
---|
Author: Agner | Date: 2016-09-14 00:45 |
Kurt Baumgardner wrote: "Believe me, I like the idea of the super-fast memory map, and the dual system idea is interesting. [...] What programmer would choose the slow model?"

The OS will choose for you. If your application is well behaved and avoids memory fragmentation, then you will use the on-chip memory map. If you allocate and deallocate huge amounts of memory randomly so that the memory becomes fragmented, then the software-based memory paging will be activated when the on-chip memory map fills up. If the user opens and closes a lot of memory-hungry applications so that there is no contiguous space left to open the next program in, then the OS may issue a warning and give the user the option to close some of the applications or run a garbage collector that moves some memory blocks around. Programmers will learn that if they want top performance, they should exercise some discipline in memory allocation. I believe that most applications will be able to run without memory fragmentation.
Hubert Lamontagne wrote: "A lot of branch instructions that your CPU pipeline will see are in loops and generally branch the same way a lot of times in a row, and would actually get slower if you ran both sides, because rarely executed paths would be competing for resources. Here's a random C++ example"

The branch predictor will know whether a branch is easily predictable. It could select the branches with the worst prediction rate for executing both sides speculatively. In your example, an optimizing compiler may recognize that the inner branch is loop-invariant and move it outside the loop, giving you one loop in each branch. The branch predictor should be able to get perfect prediction for a loop with constant count.
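The loop-unswitching transformation Agner describes can be sketched like this. The function names and the loop bodies are invented for illustration; the point is only the shape of the transformation.

```cpp
#include <array>

// Before: one loop with a loop-invariant branch inside it. The branch
// has the same outcome on every iteration.
void processNaive(std::array<float, 200>& s, bool fade) {
    for (int i = 0; i < 200; i++) {
        if (fade) s[i] *= 0.5f;
        else      s[i] += 1.0f;
    }
}

// After unswitching: the invariant test is done once, outside,
// leaving one branch-free loop on each side with a trivially
// predictable loop branch.
void processUnswitched(std::array<float, 200>& s, bool fade) {
    if (fade)
        for (int i = 0; i < 200; i++) s[i] *= 0.5f;
    else
        for (int i = 0; i < 200; i++) s[i] += 1.0f;
}
```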
Kurt Baumgardner wrote: "16Gb memory on the same chip as the CPU"

Hubert Lamontagne wrote: "http://www.zdnet.com/article/why-doesnt-intel-put-dram-on-the-cpu/"

This is an obvious thing to do. I believe that we will see a lot of on-chip RAM some day when technology allows it. It may be SRAM. Small system-on-a-chip microcontrollers already have both SRAM and flash on the chip. ForwardCom may be implemented in this fashion for some dedicated applications.
Kurt Baumgardner wrote: "I *really* like the idea of programmer-built extensions to the instruction set"

You may have an FPGA on the chip. An application should be able to monopolize this FPGA, or at least a part of it. I am trying to avoid microcode completely in ForwardCom. |
Reply To This Message |
Paging |
---|
Author: | Date: 2016-09-18 13:55 |
Agner wrote: [...]

Actually, Intel has already been studying the performance gains and power issues of logic+DRAM stacking since the mid-2000s (see "Die Stacking (3D) Microarchitecture" at www.cs.cmu.edu/afs/cs/Web/People/chensm/LBA_reading_group/papers/3D_stacking_Black06.pdf ). |
Reply To This Message |
A null register? |
---|
Author: | Date: 2016-09-23 12:55 |
Very interesting project. What if there were a special named register (e.g. v0) with a special meaning? It could be used to quickly get a zero value without creating any false dependency. I really don't like the x86 XOR hack. It could also be used to discard a result. The main drawback I can see is that a register name is wasted. It would also make register renaming a bit more complicated (I think), but not by much, as far as I understand. I would be pleased to know what you think of this. |
Reply To This Message |
A null register? |
---|
Author: Agner | Date: 2016-09-24 00:25 |
Some instruction sets have a null register. But, as you say, it is a register name wasted.
There is no need for a register with the fixed value zero because a constant can be embedded in the instruction, and this constant can be zero or any other value.
|
Reply To This Message |
A null register? |
---|
Author: | Date: 2016-09-24 09:39 |
The nice thing about a null register is that it lets you remove some opcodes. MIPS does this: there is no actual MOV instruction. It is replaced by "OR $rd, $rx, $zero" (register move), "ADDIU $rd, $zero, imm" (load a small immediate), and "LUI $rd, imm" (load an upper immediate). Generally, whether or not this is a gain tends to depend on register file size: |
Reply To This Message |
A null register? |
---|
Author: | Date: 2016-09-26 03:43 |
Thank you for your insight. Is it possible to use any vector instruction with an immediate scalar broadcast? Discarding the result is not so interesting with ForwardCom, as there is no instruction with two outputs. |
Reply To This Message |
A null register? |
---|
Author: Agner | Date: 2016-09-27 01:21 |
csdt wrote: "Is it possible to use any vector instruction with an immediate scalar broadcasted?"

Yes. The immediate value can have different sizes: integer, float, or a small signed integer converted to float. |
Reply To This Message |
Indexed registers |
---|
Author: | Date: 2016-09-26 22:00 |
Ok, here's an idea I haven't heard of: Set up some registers: ProgramID, ThreadID, and ContextID. These are controlled by the CPU and set up by the OS when spawning a program or thread. The app programmer only has access to ContextID. The bits of each of these combine to address a memory buffer for the CPU registers. A thread or program task switch simply indexes a different set of registers - no need to preserve registers between task switches. You could even include some instructions for register exchange (EXX) that simply change ContextID. Maybe this mechanism could also supply register renaming functionality...? The actual size of the registers is implementation-specific; it just depends on how the address bits are divided up. This supports the variable vector registers. Of course, this would have to be a very fast memory. The idea came from the page 0 stuff you could do on the good ol' 6502, where the first page of memory could be addressed with one byte. If the actual registers were separate, and the register page could be copied from/to the real registers, you could use ContextID to fill all registers at once, for very fast function initialization. The compiler could place instructions at program start that would fill up register contexts for critical functions, and call up that register set when entering each function. There's a lot of possibilities. It all depends on the ability to read from/write to this register page very quickly, and the ability to put enough of this memory on the chip. The amount of memory would limit the number of concurrent programs, threads, and contexts, as well as the number and size of registers. Maybe you'd have to consider a program and a thread the same thing, which could reduce the size of this memory somewhat. If done right, I think this could really speed up task switches and function initialization, and save stack space. Another idea could be a partial register exchange. 
ProgramID and ThreadID (or just ThreadID) point you to the proper register bank, but a special command could cause, say, R0 through R15 to always read from ContextID 0, while R16 through R31 would read from the context set by this special bank-switch command. For a 2-bit ContextID, that provides another 64 registers to use, one bank of 16 at a time. So R0 through R15 stay the same, but R16 through R31 are pulled from one of four banks. Or you do the full context switch, for 32 out of 128. I'm not sure how this would affect other processes, but on the surface it sounds powerful. Please note that I am describing two opposing possible setups; I see benefit in both. Unfortunately, I think either setup might have performance problems unless some very specialized burst read/write hardware is built. But it could be a joy to program. Imagine placing all your immediates and constants into a register context once at the start of your program. Then, when you call your function, a single command sets up all your constants, your loop counts, and your buffer pointers, without using the stack or even setting registers. Compilers would have to be a bit different, but we're trying for a better design, right? Maybe a routine could "push its context" to a function to achieve by-reference registers, or copy its context into another for by-value, without using the stack or explicitly setting registers. Is it a crazy idea? It is complex, but could be very powerful. I wonder if it is practical. Ok, here's my last idea: if the above cannot be done, how about an instruction that pushes registers onto the stack based on a mask? The push-by-mask opcode is followed by an immediate 32-bit value, with one bit dedicated to each register. Registers are always pushed in the same order. To push r0 through r7, you'd use an immediate value of 11111111b. Or, to push r2, you'd use 100b. I guess this would be slow, but it would be compact and useful. Pop would, of course, reverse the order, so the same mask could be used. |
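The push-by-mask idea can be modeled in software to show the semantics. These are hypothetical helper functions, with a vector standing in for the hardware stack; the key property is that a fixed push order lets the identical mask drive both push and pop.

```cpp
#include <cstdint>
#include <vector>

// Software model of the proposed push-by-mask instruction: bit n of
// the 32-bit immediate selects register n, and selected registers are
// always pushed in ascending order.
void pushByMask(std::vector<uint64_t>& stack, const uint64_t regs[32],
                uint32_t mask) {
    for (int n = 0; n < 32; ++n)
        if (mask & (1u << n))
            stack.push_back(regs[n]);
}

// Pop walks the registers in descending order, which exactly undoes
// the push, so the caller reuses the same mask.
void popByMask(std::vector<uint64_t>& stack, uint64_t regs[32],
               uint32_t mask) {
    for (int n = 31; n >= 0; --n)
        if (mask & (1u << n)) {
            regs[n] = stack.back();
            stack.pop_back();
        }
}
```

For example, a mask of 0xFF (11111111b) saves r0 through r7 in one instruction and restores them with the same constant at function exit.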
Reply To This Message |
Indexed registers |
---|
Author: Agner | Date: 2016-09-27 01:46 |
My proposal has some features for speeding up context switches, but not quite as radical as you are proposing. The memory map is so small that it can be contained on the chip. There are separate memory maps for system code. In benign cases where a process uses only a small part of the total memory map area on the chip, you may swap memory maps when switching processes. All code is addressed relative to the instruction pointer and all read/write memory is addressed relative to the data pointer and the stack pointer. This means that everything is addressed relative to just three registers which can quickly be changed. There are separate registers that are only accessible to system code. These can be useful in a device driver. The device driver needs access to the registers of the application program because these registers (actually, only half of them) can be used for parameter transfer to the device driver. The extra registers can be used by the device driver for temporary storage of the application's registers. I interpret your Setup # 1 as something like register banks that can be swapped. This may be added as an optional feature on ForwardCom, but I don't know if the costs in terms of central silicon space can justify it. Your setup # 2 is reading and writing all registers to some memory area or cache. I have avoided an instruction for "read all registers" or "write all registers" because I want to avoid complex micro-coded instructions and because these instructions are very slow on x86 (I don't know about other architectures). Maybe a special on-chip cache for the sole purpose of saving registers on context switch would be useful. |
Reply To This Message |
Indexed registers |
---|
Author: | Date: 2016-09-27 21:49 |
Makes sense. Yeah, the only way setup #2 would be worth it is if the registers could all be stored in a single cycle, like a latch, which would require an unconventional memory with a very wide data bus. Probably not practical. My goal has been to present new ideas, and let you smarter guys consider the possibilities and practicality. If this ability could be built, it would radically change the way functions are built, and executed, and it has support for your variable vector widths, which made it seem like a nice fit. I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup. It would be something like inlining your whole program. Oh well :) Your current setup seems like a nice compromise. |
Reply To This Message |
Indexed registers |
---|
Author: Agner | Date: 2016-09-28 00:53 |
Kurt Baumgardner wrote: "I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup."

If you are talking about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and different registers for variables that do need to be saved across the call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls, then the critical innermost loops are likely to be in the deeper functions, while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack, separate from the data stack, so that you have near zero cost function calls. |
Reply To This Message |
Indexed registers |
---|
Author: | Date: 2016-09-28 02:34 |
* For putting all registers in a cache-like memory block: This would require a very large register file. For instance, the integer register file on some recent Intel cores has 168 registers. With ForwardCom's 32 registers, you can fit 5 whole contexts in there. But you have to reserve some registers for renamed register values that haven't been retired yet for out-of-order execution to operate on, so you need more than that. For running a modern OS, you'd probably need dozens and dozens of contexts, which would require something as large as a data cache. E.g. 8 bytes per register * 32 registers * 64 contexts = 16 KB - that's the size of the whole L1 data cache on some modern CPUs! More reasonable versions of this already exist, though - I think x86 already does this for Hyper-Threading, with 2 different contexts at the same time. Another case where it's really useful to have just 2 bank-switched sets of registers is context switching between the user program and the OS - for handling page fault exceptions, memory allocations, software TLBs, software-emulated instructions, and so forth. Often this involves bank-switching only a few registers, such as user SP versus system SP (on some ARM systems).

* For saving/restoring registers in a cache-like memory block with separate instructions: You still have the problem that your cache-like memory block has a finite size, which means that sooner or later the OS will run out and will start to read/write parts of it to main RAM anyway - which is what this mechanism was trying to prevent in the first place! 
In theory you could map this special memory to be backed by RAM, which solves the size limit, but then every read or write of the special memory becomes a potential RAM operation: it has to be issued in order with other memory operations, it competes for data cache access ports, and it can potentially trigger interrupts. That means either the CPU has to run those operations in order (with stalls when waiting), or they all have to run speculatively and be undoable (which means the previous state has to be preserved until each operation is confirmed as having run for real). Even without RAM mapping, you'd probably still need a write buffer.

* For having load multiple/store multiple instructions |
Reply To This Message |
Indexed registers |
---|
Author: | Date: 2016-10-03 21:34 |
Thank you for the detailed answers, Agner and Hubert. Hubert Lamontagne wrote:* For putting all registers in a cache-like memory block:Well, that's a shame. I've been looking at some 80x86 disassembly of a C .exe recently that contained a lot of function calls. Most of them do a bunch of PUSHes and read some stack vars, followed by the function's actual purpose, and end with the POPs and finally the return. And of course that's the pattern in the majority of function calls: register save, fill, use, reload, followed by return (another stack pop). Seems like there should be a better way. Maybe instead of a set for every program/thread, you could have just enough space for a few levels of registers. This would correspond to the "call stack". Since most calls come with PUSHes, and most returns are preceded by POPs, it seems like a prime candidate for optimization, for both speed and code size. Maybe that's still too complex or slow for hardware, though. Agner wrote:If you talk about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and use different registers for variables that need to be saved across the function call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls then the critical innermost loops are likely to be in the deeper functions, while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack separate from the data stack so that you have near zero cost function calls.Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. 
But can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task.) Most compilers I've experienced simply preserve a function's used registers in the function - in each function - because the compiler does not anticipate the code flow. And, of course, in many cases it simply cannot: CALL r15, or even CALL (r15), for example. |
Indexed registers |
---|
Author: Agner | Date: 2016-10-03 23:54 |
Kurt Baumgardner wrote:Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. But, can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task). Most compilers I've experienced simply preserve a function's used registers, in the function - in each function, because the compiler does not anticipate the code flow. And, of course, in many cases, it simply cannot: CALL r15, or even CALL (r15) for example.The compiler has to obey the function calling convention if the function is publicly visible. The calling convention for 32-bit x86 generates a lot of push and pop. The most efficient calling convention is in 64-bit x86 on Linux. Gnu, Clang and Intel C++ compilers can be quite efficient when the relevant optimization options are on. See my calling convention manual at www.agner.org/optimize/#manuals |
Indexed registers |
---|
Author: | Date: 2016-10-04 10:57 |
Kurt Baumgardner wrote:[...]Function calls and their associated push/pops are very common, but a large majority of them happen on cold paths:

- Anything that happens on a cold path rarely makes any noticeable speed difference... Even moderately warm paths tend to count a lot less than the hottest paths in overall speed.
- The hottest paths are often loops and tend to either not have any function calls, or have calls that are only occasionally taken (error conditions etc.), or have calls that can be inlined - which means the compiler can automatically allocate registers and even reorder operations within the parent function, as long as the function is small enough for this to be beneficial.
- Hot loops that do have function calls are often very large and complex loops that are likely to stall for all sorts of other reasons, such as not fitting in the instruction cache, having data cache misses, having pointer-following chains that stall due to data cache latency, having hard-to-predict branches, loading/storing lots of values from objects and so forth, so the function call overhead might not stand out in the wash - or, if you're lucky, it could happen while the CPU is waiting for something else (which makes it free).

One option would be register windows like on SPARC, which act as a hardware-assisted stack. Here's an article about that: Another option would be load-dual and store-dual instructions, which let you combine two 64-bit register memory operations into a single 128-bit store (as long as your stack is 128-bit aligned) and are probably as fast as possible for saving/restoring a bunch of registers at the same time (especially if you can dual-issue them to a 2-port data cache). |
Bilinear Interpolation |
---|
Author: | Date: 2016-10-28 17:38 |
Bonus question: How well does ForwardCom handle bilinear interpolation? It's the one algo I can think of that just doesn't fit well in most CPUs: it wastes a bunch of cycles twiddling bits around, generating memory addresses and interpolation factors, and applying them across RGBA channels (multiplied by the number of texture layers used), so that GPUs have a big advantage at this one particular thing. It can also show up in other tasks, such as wavetable synthesis (interpolation on the time axis of the waveform + interpolation between waveforms). On most CPUs like x86 or ARM, you can use vectors to do the interpolation part, but you still have to do lots of integer scalar ops to generate the memory addresses. And if the U/V coordinates are generated per pixel and can't be linearly interpolated, then having to send the interpolation factors over to the vector unit and scramble them into place can easily wipe out potential gains. Is there a way to do a particularly fast version of this on ForwardCom? |
Bilinear Interpolation |
---|
Author: Agner | Date: 2016-10-29 00:48 |
Hubert Lamontagne wrote:How well does ForwardCom handle bilinear interpolation?It depends on how your data are organized into vectors/arrays. It requires a good deal of permutation. ForwardCom has good permutation instructions, but so do other vector instruction sets. |
Bilinear Interpolation |
---|
Author: | Date: 2016-10-29 15:31 |
Agner wrote:Hubert Lamontagne wrote:Ok, how about the most classic texture organization: single texture, 256x256, 32bpp RGBA (so that's 8 bits per component, can be either RGBA or ABGR order), no swizzle. Can that be bilinear-interpolated efficiently? |
Bilinear Interpolation |
---|
Author: Agner | Date: 2016-10-30 00:42 |
I don't think I understand your problem. You have four RGBA points, each in its own vector register. Each of these should be multiplied by a factor, and then it should all be added together. It's just multiplication and addition of 8-bit integers. You may zero-extend all to 16 bits to avoid loss of precision, then shift right and compress back to 8 bits. |
Bilinear Interpolation |
---|
Author: | Date: 2016-10-30 21:47 |
Agner wrote:I don't think I understand your problem. You have four RGBA points in each their vector register. All of these should be multiplied by a factor and then it should all be added together. It's just multiplication and addition of 8-bit integers. You may zero-extend all to 16 bits to avoid loss of precision; then shift right and compress back to 8 bits.In pseudo-ASM it looks somewhat like this:

loop:
    ; generate memory addresses
    ; load the 4 texels
    ; expand components from 8 bits to 16 bits to avoid wrap-around problems
    ; generate interpolation factors
    ; linear interpolate
    ; alpha blend
    ; increment and loop
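For reference, the loop above can be sketched in scalar C. This is only an illustration of the data flow being discussed (the 16.16 fixed-point U/V format, the wrap-around addressing, and the function name are assumptions, and the alpha-blend step is omitted), not ForwardCom code:

```c
#include <stdint.h>

/* One bilinear fetch from a 256x256, 32bpp RGBA texture (no swizzle).
   u and v are hypothetical 16.16 fixed-point texture coordinates. */
uint32_t bilinear_256(const uint32_t tex[256 * 256], uint32_t u, uint32_t v)
{
    uint32_t u0 = (u >> 16) & 255, v0 = (v >> 16) & 255;  /* integer texel coords */
    uint32_t u1 = (u0 + 1) & 255,  v1 = (v0 + 1) & 255;   /* wrap-around neighbors */
    uint32_t fu = (u >> 8) & 255,  fv = (v >> 8) & 255;   /* 8-bit interpolation factors */

    /* load the 4 texels */
    uint32_t t00 = tex[v0 * 256 + u0], t01 = tex[v0 * 256 + u1];
    uint32_t t10 = tex[v1 * 256 + u0], t11 = tex[v1 * 256 + u1];

    uint32_t out = 0;
    for (int c = 0; c < 4; c++) {            /* per 8-bit component */
        uint32_t s = c * 8;
        uint32_t a = (t00 >> s) & 255, b = (t01 >> s) & 255;
        uint32_t d = (t10 >> s) & 255, e = (t11 >> s) & 255;
        /* widen before multiplying to avoid wrap-around, lerp horizontally
           then vertically, then scale back down to 8 bits */
        uint32_t top = a * (256 - fu) + b * fu;
        uint32_t bot = d * (256 - fu) + e * fu;
        uint32_t val = (top * (256 - fv) + bot * fv) >> 16;
        out |= (val & 255) << s;
    }
    return out;
}
```

A vector implementation would do the four per-component lerps in parallel, but the address generation in the first few lines is the scalar part Hubert is asking about.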
|
ForwardCom version 1.04 |
---|
Author: Agner | Date: 2016-12-08 03:44 |
Version 1.04 update today:
|
ForwardCom version 1.04 |
---|
Author: | Date: 2016-12-12 09:42 |
As the compare_jump_* instructions cover basically all conditional control-flow cases, I guess that the compare instruction will be used mostly to compute mask registers. So I wonder if it would be more economical to restrict RD and RS to 3 bits and so recover 2x2 bits to store the condition code in the instruction itself. Reading the condition code from a register may have some novel use cases, but I think that in 99% of all code (especially in compiled code) the condition code is fixed. |
ForwardCom version 1.04 |
---|
Author: Agner | Date: 2016-12-12 11:46 |
Matthias Bentrup wrote:I wonder if it would be more economical to restrict RD and RS to 3 bits and so recover 2x2 bits to store the condition code in the instruction itself.You are right that it would be more compact, but it would break the uniform template system. This would make the hardware more complex and it would also generate an extra complication for the compiler. The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers. |
ForwardCom version 1.04 |
---|
Author: | Date: 2016-12-14 03:44 |
Agner wrote:[...] The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers.But unless I am mistaken, you need two words if you want to compare a register to an immediate value. There are trade-offs in both directions, and I don't think that either way is clearly better than the other. |
Async system calls; horizontal packing instruction |
---|
Author: | Date: 2016-12-14 06:13 |
Agner wrote:Version 1.04 update today: Hi Agner. I've been thinking about some interesting research I've read recently, and it raises a couple of questions for me: 1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket it off as an OS concern. I ask because this work on async sys calls is very compelling: https://www.usenix.org/legacy/events/osdi10/tech/full_papers/Soares.pdf 2. What do you think of the family of horizontal packing operations that the Parabix team pines for? parabix.costar.sfu.ca/wiki/ParabixTransform (See their section headed "Ideal Implementation Using SIMD Packing" on the above-linked page.) Would it be wise for ForwardCom to offer such instructions? Why or why not? |
Async system calls; horizontal packing instruction |
---|
Author: Agner | Date: 2016-12-15 00:15 |
Thank you for your interesting ideas.
1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket it off as an OS concern.System calls in ForwardCom are synchronous because they change access rights and memory map. The switch time will be low because both memory maps can be contained inside the CPU. The "FlexSC" batch processing of system calls proposed in the article you are linking to can be implemented in ForwardCom in the following way: First, you make one normal system call which will share a memory area between application and system. After that, you can use this shared memory area to communicate between the application thread and the system thread. This requires that the application thread and the system thread each run in their own core and that these two cores have easy access to the same cache. There is still a problem of synchronization between the two threads. This would probably require speculative synchronization in order to be more efficient than standard system calls. The system thread might be idle most of the time, and this is a waste of resources.
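The shared-memory communication channel sketched here could look like a single-producer/single-consumer ring buffer that the application thread fills and the system thread drains. A minimal C11 sketch, with all names, the depth, and the request encoding invented for illustration:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical FlexSC-style syscall channel: one producer (application
   core) pushes encoded requests, one consumer (system core) pops them.
   RING_SIZE must be a power of two so indices can wrap with a mask. */
#define RING_SIZE 16

struct syscall_ring {
    _Atomic size_t head;           /* written only by the producer */
    _Atomic size_t tail;           /* written only by the consumer */
    uint64_t entries[RING_SIZE];   /* encoded syscall requests */
};

/* Returns 1 on success, 0 if the ring is full. */
int ring_push(struct syscall_ring *r, uint64_t req)
{
    size_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE) return 0;                 /* full */
    r->entries[h & (RING_SIZE - 1)] = req;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 1;
}

/* Returns 1 and fills *req on success, 0 if the ring is empty. */
int ring_pop(struct syscall_ring *r, uint64_t *req)
{
    size_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t == h) return 0;                             /* empty */
    *req = r->entries[t & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 1;
}
```

The acquire/release pairing is what stands in for the "speculative synchronization" problem Agner mentions: on real hardware the cores still pay for the cache-line ping-pong on head and tail.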
What do you think of the family of horizontal packing operations that the Parabix team pines for? parabix.costar.sfu.ca/wiki/ParabixTransformThat sounds like a good idea. We may implement an instruction that shuffles the bits in each 64-bit block (b0,b1,b2,...,b63) of a vector register into the order (b0,b8,b16,...,b56,b1,b9,b17,...,b57,b2,b10,b18,...,b58,...,...,b7,b15,b23,...,b63). This can be followed by a permute of the bytes into the eight streams. Do you have any good use cases for the parallel bit streams?
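The bit shuffle described above is an 8x8 bit-matrix transpose within each 64-bit block. A plain-C sketch of just the permutation (a bit-by-bit loop for clarity; hardware or SWAR code would use a few masked shifts instead):

```c
#include <stdint.h>

/* Transpose the 8x8 bit matrix held in a 64-bit word: output bit k is
   input bit (k%8)*8 + k/8, which produces exactly the order
   b0,b8,b16,...,b56, b1,b9,...,b57, ..., b7,b15,...,b63 described above.
   The mapping is an involution, so applying it twice is the identity. */
uint64_t bit_transpose8x8(uint64_t x)
{
    uint64_t r = 0;
    for (int k = 0; k < 64; k++) {
        int src = (k % 8) * 8 + k / 8;
        r |= ((x >> src) & 1ULL) << k;
    }
    return r;
}
```

Applied to 8 bytes of ASCII text, this gathers bit 0 of every byte into the low byte of the result, bit 1 into the next byte, and so on - the per-bit streams that the Parabix transform works on.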
Finally, it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.Good idea. We could make a comparison overview and put in on www.forwardcom.info or github.com/forwardcom. It may also include comparisons with x86 and ARM, but it would be too much to include all common instruction sets. I can make a draft and let you comment on it and make additions. |
Comparison of instruction sets |
---|
Author: Agner | Date: 2016-12-17 03:39 |
Joe Duarte wrote:it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.Now I have made a comparison of instruction sets at www.forwardcom.info/comparison.html. Please correct any errors and important omissions. |
Comparison of instruction sets |
---|
Author: | Date: 2016-12-28 21:01 |
Agner wrote:Joe Duarte wrote: Looks good. Notes:
|
Comparison of instruction sets |
---|
Author: Agner | Date: 2016-12-29 00:56 |
Joe Duarte wrote:Why 32 registers? I'm distrustful of register counts like 32, 16, etc. – the Suspiciously Convenient Numerals of Computing. I understand why register width and data types need to be 32 or 64 bits, but we shouldn't expect the optimal number of registers to be 32 any more than 31, 29, or 43. The optimal number should be empirically determined.The number of registers is a power of 2 because n bits in the instruction code give 2^n possible combinations. If we had 43 registers then we would need 6 bits in each register field and we would waste 21 unused combinations. We don't want unused combinations because code density is important. Some combinations could have special meanings at the cost of more complicated hardware. ForwardCom has a few special combinations: 31 means the stack pointer in some contexts, and "no register" in other contexts; 29 means the data section pointer and 30 means the instruction pointer in some addressing modes.
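The code-density argument can be made concrete: with exactly 32 registers, every register field is 5 bits and decodes with a fixed shift and mask, leaving no unused encodings. The field positions below are made up for illustration and are not the actual ForwardCom template layout:

```c
#include <stdint.h>

/* Decode a 5-bit register field at an arbitrary bit position in a
   32-bit instruction word. With 2^5 = 32 registers, every one of the
   32 field values names a register - nothing is wasted, which is why
   43 registers (needing 6 bits, wasting 21 codes) would hurt density. */
static inline unsigned reg_field(uint32_t instr, unsigned pos)
{
    return (instr >> pos) & 0x1F;   /* values 0..31 */
}
```

A decoder built this way needs no lookup or special-casing to find the register numbers, which is the fast-decode property Hubert describes in the next message.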
What about virtualization support? Does ForwardCom need it?Good point. I have thought about support for multiple instruction sets. See chapter 10 in the forthcoming version 1.05. A microprocessor with support for multiple instruction sets would ease the transition to a new instruction set. Virtualization would be useful here to produce one virtual machine for each instruction set, at the cost of a little extra hardware. |
Comparison of instruction sets |
---|
Author: Hubert Lamontagne | Date: 2016-12-30 19:51 |
Joe Duarte wrote:2. Why 32 registers? I'm distrustful of register counts like 32, 16, etc. – the Suspiciously Convenient Numerals of Computing. I understand why register width and data types need to be 32 or 64 bits, but we shouldn't expect the optimal number of registers to be 32 any more than 31, 29, or 43. The optimal number should be empirically determined.Register IDs have to be decoded fast... it's one of the things that determines the number of stages you need between the instruction cache stage and the register rename stage (fewer stages = fewer cycles of penalty on branch misprediction). The fastest way to encode register IDs in instructions is simply to have N bits that are always at the same position in the instruction, which is how you end up with 8, 16, 32, etc. logical registers. The number of physical registers after rename can vary a lot more, though - for instance, Sandy Bridge has 160 integer and 144 floating point physical registers after renaming, and this is a carefully optimized number so that the register file size has the right balance of propagation delay through the chip vs instruction reordering capacity. |
Comparison of instruction sets |
---|
Author: | Date: 2017-01-05 16:47 |
RISC-V is starting to reach silicon. Performance is looking pretty good, for a microcontroller. (comparable to the ARM in a Teensy) |
ForwardCom version 1.05 |
---|
Author: Agner | Date: 2017-01-22 04:39 |
Changes in version 1.05:
|
Syscall/ISR acceleration |
---|
Author: Jonathan Brandmeyer | Date: 2017-01-22 23:22 |
There are a few features of ARM Cortex-M (the microcontroller profile) that I miss when writing system-level programs in other architectures, including Cortex-A/R:

- Lack of a control register space. In Cortex-M, the processor's control registers are all placed in a per-core region of memory at a reserved physical address. Even the memory protection unit's control registers are saved up there. You don't have to muck around with coprocessor control instructions to manage the caches, memory protection, etc: it's all as simple as addressing into the members of a struct based at some fixed address. I believe, but cannot prove, that this makes virtualization easier, since the next level down in the supervisor hierarchy can just map the mocked virtual peripheral to the reserved hardware address.

- Automatic stacking of registers. A major design change in Cortex-M versus ARM7 was that instead of having banked registers for handler mode (a system mode equivalent), the core automatically saves the call-clobbered registers to the stack of the thread mode (user mode equivalent) stack pointer. It also sets the link register to a value which points into the system-reserved memory space and has the meaning of "return from interrupt". Obvious advantage #1: interrupt handlers can be written in plain ordinary C. Slightly less obvious advantage #2: if the processor proceeds to execute another interrupt immediately after the first (say, because the first triggered an OS scheduling event, for example), the user mode stack pointer doesn't need to be saved again. Similarly, when performing a context switch between different user-mode threads, only the OS scheduling interrupt must save the rest of the user-mode thread's registers to its stack frame. The obvious disadvantage is that the ABI is partially baked into the hardware with respect to the call-clobbered registers and stack pointer. 
On a Skylake-class OoO machine, I suspect that you can execute loads of the call-clobbered registers in parallel with the writes of those registers, too, as the ISR goes through the dispatch process.

- Interrupt vectoring. Each peripheral interrupt source gets its own handler instruction assigned to it. This makes interrupt dispatch to the particular driver especially easy. I think this can be taken one step further, where the handler can get an argument or a few passed to it. This makes the signature of the interrupt handler similar to that of a closure in C: "void specific_isr(void *os_provided_data, int peripheral_provided_data)". So long as the function address + arguments are less than the size of a cache line, I think it shouldn't cost too much even on a big OoO machine. The interrupt controller can be 'far' away from the core it is dispatching to, passing a message the size of a cache line or less to trigger the interrupt. Now we are also baking in a few argument-passing registers, too. My professional experience is mostly on microcontroller-class hardware, and having interrupt dispatch as fast as a function call is rather pampering. It seems from the outside that the generally poor performance of interrupt handling on larger processors has led to a series of design attempts to avoid it, some going as far as polling (!) the device instead of taking an interrupt at the time of demand. My general assumption has been that the cost mostly comes from blowing the dcache and icache for even trivial interrupt responses, just due to the branchy/pointer-chasey dispatch logic itself. Just having the peripheral interrupt controller perform the vectoring for the operating system would drastically reduce the data-cache pressure incurred by forcing the ISR to perform all of the pointer-chasing dispatch logic on its own, if that underlying assumption is true.
|
Syscall/ISR acceleration |
---|
Author: Agner | Date: 2017-01-23 00:05 |
Jonathan Brandmeyer wrote:There are a few features of ARM Cortex-M (the microcontroller profile) that I miss when writing system-level programs in other architecturesThanks for the input. The system instructions of ForwardCom are not developed yet. The idea at this point is that there are two extra sets of scratch registers, one for device drivers and one for the system core. A device driver can simply save the user mode registers to its scratch registers rather than to the stack in order to avoid polluting the data cache. I would prefer to save the registers one by one, as needed, to avoid complex micro-coded instructions. Data stack and call stack are preferably separate. The call stack can stay on-chip most of the time. There may be a separate call stack for device drivers and system code. We can keep the system call stack on-chip by banning recursive function calls in device drivers. This would make it possible to make a device driver that doesn't even touch the data cache unless it needs to do a lot of data processing. The device driver has access to a limited block of the application program's data space, which is likely to be well cached. I don't know how an interrupt handler gets its memory. Maybe the application program can call a device driver to enable the interrupt handler and assign some of the application's memory space to it. I have not decided how interrupts are dispatched, but we may have the interrupt vector table stored on-chip. |
Syscall/ISR acceleration |
---|
Author: Jonathan Brandmeyer | Date: 2017-01-25 07:50 |
Agner wrote:The idea at this point is that there are two extra sets of scratch registers, one for device drivers and one for system core.One of the gotchas from ARM banked registers is that the FIQ register state was retained across ISR return and reentry. In effect, the core has additional named architectural registers that are almost never actually accessible at any given time, yet still consume space in the physical register file. If the IRQ and System scratch registers are instead defined to be zeroed on IRQ or syscall entry, then the underlying physical registers can be freed by the rename engine on exception return, and reallocated automatically on first use after exception entry. |
Syscall/ISR acceleration |
---|
Author: Agner | Date: 2017-01-25 13:46 |
Jonathan Brandmeyer wrote:One of the gotchas from ARM banked registers is that the FIQ register state was retained across ISR return and reentry.We probably have to clear the registers for security reasons. |
ForwardCom version 1.05 |
---|
Author: | Date: 2017-01-23 11:58 |
Hi. In table 3.16 on page 24 there is probably an error. In the D template there is only a 3-bit OP1 field, so OP can't be 0-15, but only 0-7 in format 1.5 D. The same problem is in table 3.17: unconditional jump/call has OPJ = 0-7 and 8-15 in a 3-bit field. ??? And another question: how do you discriminate between jump instructions in format 1.5 C and format 1.5 D? |
ForwardCom version 1.05 |
---|
Author: Agner | Date: 2017-01-24 00:36 |
Jiří Moravec wrote:Unconditional jump/call has OPJ=0-7 and 8-15 in 3bit field. ???The lower 3 bits of the 6-bit OP1 field are occupied by part of the 24-bit address in the 1.5 D format. The 24-bit jump has part of the address in the lower 3 bits of OP1 and 000 in the upper 3 bits. The 6-bit OP1 field will then be 0-7. Likewise, the 24-bit call has part of the address in the lower 3 bits of OP1 and 001 in the upper 3 bits. The 6-bit OP1 field will then be 8-15. Other instructions cannot have format 1.5 if OP1 is less than 16. The instructions that are missing for this reason are subtract constant and jump conditionally. These instructions can be replaced by the corresponding add constant and jump conditionally by using the negative constant. So the rule is that format 1.5 D is used when OP1 < 16 and format 1.5 C is used otherwise. I organized the puzzle in this way in order to squeeze jumps and calls with 24-bit relative address into the template system. The 24-bit offset is scaled by 4 so that we can cover an address range of +/- 32 megabytes, which is sufficient for most programs. Unconditional jumps and calls are so frequent that it is nice to have them in single-size instructions. |
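The address arithmetic can be sketched as follows; this assumes two's-complement sign extension and an invented helper name, not actual ForwardCom decoder code. A 24-bit offset scaled by 4 covers ±2^23 × 4 bytes = ±32 MB:

```c
#include <stdint.h>

/* Sign-extend a 24-bit relative offset and scale it by 4 (instruction
   words are 4-byte aligned), giving a target within +/- 32 MB of ip.
   The shift pair relies on arithmetic right shift of signed values,
   which all mainstream compilers provide. */
int64_t jump_target(int64_t ip, uint32_t offset24)
{
    int32_t off = (int32_t)(offset24 << 8) >> 8;  /* sign-extend 24 -> 32 bits */
    return ip + (int64_t)off * 4;
}
```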
Jump prefetch? |
---|
Author: csdt | Date: 2017-01-27 10:21 |
Branch prediction is currently one of the hard parts of a CPU. I propose to help it a bit in the same way we can help the cache: tell it in advance where we want to jump. I think this can be done by a couple of instructions: This can help basic control flow if the program is not limited by the instruction bandwidth of the second instruction. For safety reasons, this "special register" should be saved and restored during context switches. What do you think of this? |
Jump prefetch? |
---|
Author: Agner | Date: 2017-01-27 11:45 |
csdt wrote:Branch prediction is currently one of the hard parts of a CPU.The hard part of branch prediction is not to predict the target address, but to predict whether it will jump or not. The target address can be calculated at an early stage in the pipeline for direct conditional and unconditional jumps. Certain instruction formats are reserved for jump instructions so that these instructions can be identified easily. The earlier in the pipeline we can calculate the target address, the smaller the penalty for not knowing the address of the next instruction, and the smaller the need for large branch target buffers (BTBs). It is much harder to predict whether it will jump or not. A good predictor contains large pattern tables that consume a lot of silicon space. We can reduce the branch misprediction penalty by having a short pipeline and perhaps by fetching and decoding both targets of a two-way branch. Multiway branches are even harder to predict, but we cannot do without them. Function returns are easy to predict. If we have separate data stack and call stack, then the call stack will usually be so small that it can reside on the chip. |
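The on-chip call stack used for return prediction can be modeled as a simple LIFO: calls push the return address, returns pop it. The depth and overflow behavior below are illustrative assumptions, not a ForwardCom specification:

```c
#include <stdint.h>

/* Toy model of a return address stack (RAS). While the program's call
   depth stays within RAS_DEPTH, every return address is predicted
   perfectly; deeper nesting silently drops the oldest predictions. */
#define RAS_DEPTH 16

struct ras {
    uint64_t entry[RAS_DEPTH];
    int top;                       /* current call depth */
};

void ras_call(struct ras *s, uint64_t return_addr)
{
    if (s->top < RAS_DEPTH)
        s->entry[s->top] = return_addr;  /* on overflow the address is lost */
    s->top++;
}

/* Returns the predicted return address, or 0 if no prediction is available. */
uint64_t ras_return(struct ras *s)
{
    if (s->top == 0) return 0;
    s->top--;
    return (s->top < RAS_DEPTH) ? s->entry[s->top] : 0;
}
```

This is why Agner notes that returns are easy: unlike a conditional branch, the predicted target is simply whatever the matching call pushed.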
Jump prefetch? |
---|
Author: csdt | Date: 2017-01-30 05:06 |
The point of this solution is to replace conditional jumps by unconditional jumps to predefined targets. It is always possible to consider a conditional jump as an unconditional jump whose target is either the one we want or the instruction just after the jump, depending on the condition. Doing this, we can use unconditional jumps even with branches and loops, with a predefined address (set by a previous instruction). |
Jump prefetch? |
---|
Author: Agner | Date: 2017-01-30 07:14 |
csdt wrote:The point of this solution is to replace conditional jumps by unconditional jumps to predefined targets.It is not very clear what you mean here. It is possible, of course, to put the jump target address in a register and then do an indirect jump to the register value. This doesn't solve the problem in current hardware designs because the value of the target register is not known at the time the next instruction has to be fetched into the pipeline. You would need to store the target address several clock cycles in advance and then have a mechanism for signalling to the instruction fetcher at what point it would have to fetch from the new address. The compiler then has to place the switch-target instruction at the right position in the instruction sequence. The distance from the switch-target instruction to the point of divergence has to fit the length of the pipeline, which depends on the specific hardware implementation. This may be possible, but I am not aware of any attempt to construct such machinery. It is always possible to consider a conditional jump like an unconditional jump where the target is either the one we want, or the instruction just after the jump depending on the condition.Some instruction sets have an instruction for conditionally skipping the next instruction. If the instruction that you skip is not a jump instruction, then this is in effect a conditional execution of the instruction. ForwardCom can execute instructions conditionally by using a predicate or mask. This avoids misprediction, but it doesn't save time because the time it takes to not-execute an instruction is the same as the time it takes to execute it. The reason for this is that the value of the predicate register is not known at the time the instruction is scheduled. It might be possible to construct a scheduler that checks whether the value of the predicate register is known sufficiently early to skip the instruction completely and not schedule it for execution. 
This would require a quite complicated hardware design, and I don't think it has ever been done. If you have a system that conditionally skips one instruction, and the instruction you skip is a jump instruction, then you have a situation that is equivalent to a conditional jump. The instruction fetcher will not know in advance which instruction comes next. This all boils down to the issue of pipelining. Current high-end microprocessors have a pipeline of typically 10-20 stages. This means that the processor has to know 10-20 clock cycles in advance which instruction to fetch next. If this information is not actually known 10-20 clock cycles in advance then you cannot avoid a waste of time if you make the wrong guess, no matter how you implement the branching. Any mechanism that makes this information known to the instruction fetcher sufficiently early would be quite complicated to implement both in the hardware and in the compiler. |
Jump prefetch? |
---|
Author: csdt | Date: 2017-01-30 09:53 |
Agner wrote:csdt wrote:The idea is not to help the prediction, but to "override" the prediction for a single given jump instruction. So there is no failed prediction in the same way as you cannot "mispredict" an unconditional jump to a fixed target. Let me give you a few examples (pseudo-asm syntax). The 3 instructions are: Here is a simple loop: (prolog) So basically, the jump is always taken, and you know exactly the address of the jump in advance. The same applies to if conditions: %01 = (compute the condition) Of course, if you set the target too early, you need to wait, so you need to execute the "set_jmp" early enough, but the same problem arises with data prefetching anyway. |
Jump prefetch? |
---|
Author: Agner | Date: 2017-01-31 01:40 |
csdt wrote: The 3 instructions are:

OK. Now I think I understand you. Your idea is somewhat like an extension to a "delay slot". Some old RISC architectures (MIPS, SPARC, OpenRISC) have a delay slot of one or two instructions after a jump. The instructions in the delay slot are executed before the jump is executed. The main problem with these delay slots is that they are optimized for a particular hardware implementation. A new implementation with a longer pipeline and more parallelism will need a longer delay slot.

This makes me wonder if it would be useful to have a variable-length delay slot. A branch instruction could have an 8-bit field indicating a delay slot of 0-255 instruction words. The compiler will place the delayed branch instruction as early in the code as it can. The instruction fetcher in the CPU will then know in advance what to fetch after the branching point. If the fetcher has already fetched past the branching point at the time the delayed branch is executed, then it has to discard the extra instructions and fetch again. This will give a bubble in the pipeline, but the bubble will be smaller the longer the delay slot is.

There are known problems with delay slots: interrupts, debug breakpoints, and jumps in the delay slot lead to extra complications. We might overcome these problems by making the delayed jump instruction a "prediction hint" to be confirmed later by a regular branch instruction at the actual branching point. Only in the rare case that the prediction hint is lost or compromised due to interrupts or whatever will we have a misprediction at the branch point.

The delay slot has to be quite long. Modern microprocessors can execute 3 or 4 instructions per clock cycle and have a pipeline of 10-20 stages. With 3 instructions per clock on average and a 10-stage pipeline we would need 30 instructions to fill the delay slot.
If no other jump instructions are allowed in the delay slot, then we would rarely be able to make a sufficiently long delay slot.

Now, back to your proposal. I understand that you want to have a separate register for jump targets only. We would have to set this jump target register some 30 instructions in advance in order to keep the pipeline full. I think it is unlikely that we have no other branch instructions in this 30-instruction space, so we need more than one jump target register. If we need many jump target registers, then we might as well use the general purpose registers for jump targets instead. So, now your set-jump-target instruction just becomes ordinary instructions that calculate a jump target and put it into a register. And your jump-to-target instruction becomes a register-indirect jump.

Now the problem becomes how to make the register value available to the branch prediction front end of a superscalar processor. The instructions that set the jump target are executed in the out-of-order back end, while the branch predictor operates in the in-order front end. This makes me think: is it possible to execute simple instructions already in the in-order front end of a superscalar processor? If we can execute the instructions that calculate the jump target already in the in-order front end, then we have reduced the necessary delay to a few clock cycles.

I am imagining a dual execution system: we have a system of in-order ALUs and out-of-order ALUs. Simple instructions are executed in the in-order ALUs if the operands are already available while the instruction is decoded in the in-order front end. If the operands are not available yet, then the instruction is passed on to the out-of-order back end. Here, we need a system to track all the general purpose registers to see if they are ready. We can have a "register cache" in the in-order front end.
Every time the decoder sees an instruction that writes to a register, it will mark the entry for this register as dirty and remember which instruction it was. When this instruction is later executed (in order or out of order), the value is written to the register cache and marked as clean (unless another instruction writing to the same register has been encountered in the meantime). Simple instructions can execute in the in-order system if all their input operands are ready. The result is stored both in the register cache and passed on to the register renamer/allocator.

Branch instructions, jumps, calls, etc. should execute in the in-order front end. The compiler should make sure that any registers that the branch depends on are calculated as early as possible, and preferably using simple instructions that can execute in the in-order front end. If a branch depends on a memory operand, such as a jump table or a table of function pointers, then the compiler should load the memory operand into a register as early as possible and then make a register-indirect jump.

A simple loop with a loop counter could benefit from this. The increment and compare instructions for the loop counter will be simple instructions that are likely to execute in the in-order front end so that the values are available early. Multiway branches are particularly difficult to predict, and they could benefit from a system that calculates the target address in advance. For example, a byte code interpreter typically loads one byte at a time and uses this code as an index into a multiway jump table. The software can easily read the next byte code in advance, fetch the target from the jump table and store it in a register.

This is an interesting discussion. The software may be able to calculate branch targets early in many cases, but the intricacies of out-of-order execution make it complicated to forward this information to the in-order front end.
Whatever solution we may come up with might be complicated, but still use less silicon space than the complicated branch prediction mechanisms used in modern microprocessors today.
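The byte-code interpreter case can be modeled in software. In this toy Python interpreter (opcodes and handlers invented for illustration), the handler for the next opcode is looked up one step ahead of the opcode being executed - the software analogue of computing an indirect jump target early:

```python
def run(code):
    """Toy byte-code interpreter with one-step-ahead dispatch.

    Looking up the *next* handler before executing the current one
    mimics reading the next byte code in advance and fetching its
    jump-table target into a register."""
    def op_inc(a): return a + 1      # opcode 0: increment accumulator
    def op_dbl(a): return a * 2      # opcode 1: double accumulator
    handlers = {0: op_inc, 1: op_dbl}

    acc = 0
    pc = 0
    next_handler = handlers.get(code[pc]) if code else None
    while pc < len(code) and code[pc] != 255:   # 255 = halt
        handler = next_handler
        # prefetch the following opcode's target before executing
        next_handler = handlers.get(code[pc + 1])
        acc = handler(acc)
        pc += 1
    return acc

print(run(bytes([0, 0, 1, 255])))  # (0+1+1)*2 = 4
```

In real hardware the lookup and the dispatch would overlap, which is exactly the latency this thread is trying to hide.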
Jump prefetch?
Author: csdt | Date: 2017-01-31 04:31
We now understand each other ;) The solution I propose shares some features with a delay slot, but solves the main issue: the delay slot is implementation-dependent.

Agner wrote: There are known problems with delay slots: Interrupts, debug breakpoints, and jumps in the delay slot lead to extra complications.

For interrupts, the target must be stored and loaded back at the end of the interrupt. (You will probably lose the benefit of setting the target early, but there is no undefined behavior here.) For jumps in the "delay slot" (between setting the target and the actual jump), my first idea was to forbid this and trap when trying to jump with "jmp2target". The way to solve this is to have a flag in the jump register indicating an incorrect value. If we try to jump with this flag set -> trap. This flag is set to 1 by default and reset to 1 every time a jump happens. The flag is then set to 0 when we define a target with "set_jmp". Of course, these are just ideas and are adjustable.

Agner wrote: Multiway branches are particularly difficult to predict, and they could benefit from a system that calculates the target address in advance. For example, a byte code interpreter typically loads one byte at a time and uses this code as index into a multiway jump table. The software can easily read the next byte code in advance, fetch the target from the jump table and store it in a register.

It was exactly this kind of code that made me start thinking about this subject (with vtables, which are basically jump tables).
I think it would be better to have separate registers that can live in the front-end world, with the "set_jmp" instruction as the interface between the back-end and front-end worlds. Maybe you could add regular branch prediction for the cases where you have to wait. You will still benefit from knowing the jump target by avoiding a flush of the whole pipeline (only a part of it), but this could be much more complicated (I have no clue on this point). I also wonder about having multiple targets active at the same time. It would be very handy for nested loops and nested control flow in general, but some security issues arise. I hope my explanations are understandable enough.
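The invalid-flag protocol described above can be sketched as a small state machine (a Python toy of my own; the instruction names follow csdt's "set_jmp"/"jmp2target", everything else is invented):

```python
class JumpTargetReg:
    """Jump-target register with csdt's validity flag.

    The flag starts invalid, is cleared by set_jmp, and is set again
    by every jump, so a jump without a fresh set_jmp traps."""
    def __init__(self):
        self.target = None
        self.invalid = True        # flag defaults to 1 (invalid)

    def set_jmp(self, target):
        self.target = target
        self.invalid = False       # flag cleared: target is valid

    def jmp2target(self):
        if self.invalid:
            raise RuntimeError("trap: jump target not set")
        t = self.target
        self.invalid = True        # any jump re-invalidates the flag
        return t

r = JumpTargetReg()
r.set_jmp(0x1000)
print(hex(r.jmp2target()))  # 0x1000
```

A second jmp2target without an intervening set_jmp raises, which models the proposed trap on a stale target.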
Jump prefetch?
Author: Hubert Lamontagne | Date: 2017-02-01 01:54
Agner wrote: [...]

That sounds a lot like a Decoupled Access Execute architecture. Some documents about this:

https://cseweb.ucsd.edu/classes/wi09/cse240c/Slides/26_decoupled.pdf
http://personals.ac.upc.edu/jmanel/papers/euromicro98.pdf (a very interesting paper that estimates the relative payoff of the different mechanisms seen in out-of-order CPUs)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.430&rep=rep1&type=pdf
http://spirit.cs.ucdavis.edu/pubs/conf/dynamicarch.pdf

One general problem with this kind of architecture is that it is hard to keep the designs simple; they tend to end up more complex than plain out-of-order CPUs, in which case it is hard to argue against just going with the out-of-order design.
Agner wrote: If the operands are not available yet, then the instruction is passed on to the out-of-order back end.

A lot of simpler out-of-order CPUs do something like this for memory address operands. For instance, the classic Athlon initiates memory loads and stores in order. Because of this, instructions that generate memory addresses and don't depend on loads/stores will generally run quite a bit earlier in the execution stream, and the readiness of the operands gets tracked by the scheduler. Maybe you could add a priority flag to instructions that affect control flow so that the scheduler runs them early (and possibly to instructions that affect load/store addresses too).
Agner wrote: We can have a "register cache" in the in-order front end. Every time the decoder sees an instruction that writes to a register, it will mark the entry for this register as dirty and remember which instruction it was. When this instruction is later executed (in order or out of order) the value is written to the register cache and marked as clean (unless another instruction writing to the same register has been encountered in the meantime). Simple instructions can execute in the in-order system if all their input operands are ready. The result is stored both in the register cache and passed on to the register renamer/allocator.

Afaik, the case that really kills performance is this: somewhat predictable branches that depend on memory operands (especially rarely used memory values that are loaded late and come in from L2 or later). In C++ code this shows up as checks on rarely used or rarely changed flags that trigger rare special cases - for instance, logging errors on exceptional conditions. This leaves you with jumps that you're 99.99% sure will never be taken but will still take dozens of cycles to retire. I'm not sure I've seen any solution to this other than speculation and branch prediction (aside from the Transmeta Crusoe's strange backup-restore of the whole CPU state and Itanium's "insert omniscient compiler here" scheme)... and this mess has a rather massive cost as far as I can tell (need to back up the result of every single instruction going through the pipeline, need a large register file, need aggressive register renaming due to having so many renames per cycle).
Jump prefetch?
Author: Agner | Date: 2017-02-01 03:12
Hubert Lamontagne wrote: That is sounding a lot like a Decoupled Access Execute architecture.

Thanks a lot for the links. Yes, this resembles what I am thinking about. The papers you are referring to are from the 1990's. I found a few newer articles on the topic, but no real progress:
www.jarnau.site.ac.upc.edu/isca12.pdf
www.diva-portal.org/smash/get/diva2:635872/FULLTEXT02

These articles still don't solve the problem of how to split the instruction stream into a control stream and a compute stream. This makes me think of the following idea: let the compiler insert an instruction telling which registers are part of the control stream. For example, if register r1 is used as a loop counter, and r2 is controlling a branch, then tell the microprocessor that r1 and r2 should be privileged. Any instruction that has r1 or r2 as destination will go into the control stream. These instructions, as well as all branches, jumps and calls, will be executed in the in-order front end as far as possible. The front end should have access to the permanent register file. The back end may have out-of-order execution with register renaming. The renamed registers will be written to the permanent register file when they retire.

We have to keep track of whether the permanent registers are up to date. When the decoder sees an instruction that writes to a register, it will mark this register as invalid in the permanent register file and remember which instruction it belongs to. When this instruction is executed (whether in the front end or the back end), the register entry is marked as valid, unless another instruction writing to the same register has been seen in the meantime.

The main problem we are trying to solve with this design is that the very long pipeline of superscalar processors is bad for branch instructions. The solution with a branch target register puts the branch in the front end, but not the calculation of the branch target.
We would need perhaps 30 or 50 instructions between the calculation of the branch target and the branch itself in order to keep the pipeline full. To reduce this distance, we need to also put the calculation of the branch target, and everything that the branch depends on, in the front end as well. This includes not only branch targets, but also loop counters, branch conditions, indexes into jump tables, and all preceding calculations that these depend on. A separate set of registers and instructions for all this would be too complicated. I think it is easier to use the existing registers and tell the microprocessor which registers are part of the control stream. This wouldn't be too hard to implement in a compiler, I think. The question is how efficient we can make it in the front end. |
Jump prefetch?
Author: Hubert Lamontagne | Date: 2017-02-01 17:21
There are two different ways to select values that can be sent to the back end:
- the value is never used in any memory address calculation
- the value is never used in any branch address calculation

They are actually somewhat orthogonal. Non-indexing and non-conditional values can be marked in SSA form: starting from the bottom, any value that gets "garbage collected" (goes out of scope, runs out of destination operations) without ever going to a memory load/store address or a branch/jump address can be marked, which in turn allows marking preceding values as non-indexing and/or non-conditional. The proportion of non-indexing and non-conditional values depends on the program (the test I did ended up with about 80% non-indexing values and 60% non-conditional values, but this might not be typical at all). It might be very hard to perform this kind of marking through function calls or complex control flow, though.
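The marking pass described above can be rendered as backward taint propagation over SSA values. This is my own toy version with an invented data layout, not the test Hubert ran:

```python
def mark_conditional(values):
    """values: dict mapping value id -> (input ids, used_as_branch_cond).

    A value is 'conditional' if it feeds a branch condition, directly or
    through any chain of operations; everything else is non-conditional
    and could be routed straight to the back end. Running the same pass
    with a used_as_address flag yields the non-indexing set instead."""
    conditional = {v for v, (_, is_cond) in values.items() if is_cond}
    changed = True
    while changed:                 # propagate backward to a fixed point
        changed = False
        for v, (inputs, _) in values.items():
            if v in conditional:
                for i in inputs:
                    if i not in conditional:
                        conditional.add(i)
                        changed = True
    return conditional

# v3 = v1 + v2; branch on v3; v4 = v1 * 5 (never reaches a branch)
vals = {1: ([], False), 2: ([], False), 3: ([1, 2], True), 4: ([1], False)}
print(sorted(mark_conditional(vals)))  # [1, 2, 3]
```

Value 4 stays unmarked even though it shares an input with the branch chain, matching the "runs out of destination operations without reaching a branch" rule.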
Jump prefetch?
Author: Agner | Date: 2017-02-02 01:11
Hubert Lamontagne wrote: They are actually somewhat orthogonal:

My intention is clearly number 2. The purpose is to shorten the branch misprediction delay to probably 3 clock cycles: fetch, decode, execute. The front end may allow reordering in the sense that an instruction can wait for an operand without delaying subsequent independent instructions. But it should not have register renaming in the front end. I am focusing on branching here, which is clearly a weak spot in current superscalar designs. I am not intending to use this as a way of prefetching memory operands. The out-of-order mechanism is able to handle delayed memory operands satisfactorily.

My vision is this: the compiler indicates which registers it intends to use for control flow. For example, it may use register r1 for a loop counter, r2 for a branch condition, and r3 for an index into a jump table. All instructions that modify r1, r2, and r3 will execute in the front end if possible. The remaining instructions are sent to the back end. Instructions that have any of these registers as an input operand will have the register operand replaced by its value before they are sent to the back end. Other register operands that are available in the permanent register file can likewise be replaced by their values. Instructions that execute in the front end may write their result not only to the permanent register file but also to any pending instruction that needs it.

The result of this mechanism is that a loop controlled only by the marked registers will be unrolled in the front end. The instructions in the loop body will be passed on to the back end with the loop counter replaced by its value. The back end has register renaming so that it can execute the body of the second iteration before it has finished the body of the first iteration.
All other control flow, such as branches, jumps, calls, returns, etc., will be unrolled in the front end as well, so that control flow is effectively resolved before execution as far as it does not depend on delayed data.

The demand for advanced branch prediction is somewhat relaxed if we can drastically reduce the branch misprediction delay in this way. The branch prediction machinery in modern processors is very complicated and requires a lot of silicon space for branch target buffers and pattern/history tables. We can cut down on this and use a simpler branch prediction scheme if branches are resolved in the front end.

The front end may fetch and decode speculatively, but not execute speculatively. It may even fetch both sides of a branch that it expects to have poor prediction. Fetching both sides of a branch becomes cheaper here than in a traditional superscalar design because it only needs to duplicate two pipeline stages (fetch and decode) in order to minimize branch misprediction bubbles. The fetch and decode stages should have higher throughput than the rest of the pipeline so that they can catch up and fill the pipeline after a branch bubble. The fetch stage should be able to fetch a large block of code in one clock cycle, possibly crossing a cache line boundary. The decoder should be able to decode the first one or two instructions in a fetched code block in the first clock cycle, and determine the starting points of the remaining instructions. This will allow it to decode maybe four or more instructions in the second clock cycle after fetching a block of code. The aim of this is to keep the branch misprediction delay as low as possible.

In case the front end ALU is idle because there are no instructions that write to marked registers, it may execute any other instructions if their operands are ready.
Jump prefetch?
Author: Agner | Date: 2017-02-14 13:20
This idea about decoupling the control flow from execution is now included as a proposal in the manual version 1.06 (chapter 8.1). |
Jump prefetch?
Author: | Date: 2017-01-31 01:55
csdt wrote: [...]

You'd need to compute the conditional at minimum ~10 instructions before the jump, even in the best case on the shortest pipelines (and with a bypass path straight from the register bypass network to the instruction cache address input!). For instance, with a 3-instruction-per-cycle CPU:
I guess this could also sorta make sense for floating point or SIMD conditional branches (which often have scheduler penalties and slow value hoisting paths). Also, you don't need to calculate the branch target early (that can be supplied by the branch target predictor), only the conditional value. So maybe instead of a special jump, you could simply do this by letting the branch predictor have some special port that can read flag values early, which means that it could get 100% prediction if the flag values have been "cold" for long enough when it reaches the jump. |
High precision arithmetic
Author: fanoI | Date: 2017-03-21 03:41
Agner, is the code for addition of "big integers" on page 85 of the ForwardCom manual applicable to any CPU that can do vector integer addition? Could it work on x86 using PADDB? And this idea is applicable to all the other operators (-, *, /, &...), right?
High precision arithmetic
Author: Agner | Date: 2017-03-21 04:16
fanoI wrote: Agner the code to do an addition between "big integers" that is at page 85 of the forwardcom manual is applicable to any CPU that can do vector integers addition?

Yes.

fanoI wrote: It could work with x86 using PADDB?

Better use VPADDQ.

fanoI wrote: This idea is applicable to all other operators (-, *, /, &...) right?

Only + and -. * and / are more complicated. & is straightforward because it has no carry.
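As a software model of the technique (my own sketch, not the manual's page-85 code): add element-wise with 64-bit wraparound as a vector add would, detect which lanes carried by an unsigned compare of the wrapped sum against an operand, then propagate the carries into the higher lanes:

```python
M = (1 << 64) - 1  # lane mask for 64-bit elements (VPADDQ-sized lanes)

def bigint_add(a, b):
    """a, b: little-endian lists of 64-bit limbs of equal length."""
    # step 1: lane-wise wrapping add, like a vector integer addition
    s = [(x + y) & M for x, y in zip(a, b)]
    # step 2: a lane carried out iff the wrapped sum is below an operand
    carry = [1 if t < x else 0 for t, x in zip(s, a)]
    # step 3: propagate each carry into the next-higher lanes
    out = list(s)
    for i, c in enumerate(carry[:-1]):
        j = i + 1
        while c and j < len(out):  # a carry ripples past all-ones lanes
            out[j] = (out[j] + c) & M
            c = 1 if out[j] == 0 else 0
            j += 1
    return out

# (2^64 - 1) + 1 = 2^64 -> carries into the second limb
print(bigint_add([M, 0], [1, 0]))  # [0, 1]
```

Step 1 and step 2 are the vectorizable parts; step 3 is the serial carry propagation that makes * and / harder to treat the same way.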
Intel's Control-flow Enforcement Technology
Author: Joe Duarte | Date: 2017-04-13 20:12
FYI, Intel has an upcoming solution for protecting return addresses: https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf It would be interesting to compare ForwardCom's approach to CET. |
Intel's Control-flow Enforcement Technology
Author: Agner | Date: 2017-04-14 00:53
Joe Duarte wrote: Intel has an upcoming solution for protecting return addresses: https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf

Thanks for the info. Intel's solution is quite complicated because it has to be compatible with legacy code and legacy processors. ForwardCom's solution is quite simple: a separate call stack which stays in the CPU most of the time, and jump tables placed in read-only memory. This just shows the difference between patching an insecure system and designing a system to be secure from the beginning.
Proposal for instruction set - now on Github
Author: Agner | Date: 2017-04-27 00:05
I have started a new thread with a proposal for re-linkable libraries in ForwardCom: here |
Assembler with metaprogramming features
Author: Agner | Date: 2017-07-27 02:06
I am working on making an assembler for ForwardCom, and I think that the design of assemblers could use an update, now that we are redesigning everything anyway. I found that it was not too hard to make the assembly syntax look very much like C or Java, rather than the terrible syntaxes of many current assemblers. This will surely make it easier to write assembly code and make the code easier to read. My plans for the assembler so far are as follows:
Branches and loops can be coded either with conditional jumps or as high-level constructs like if, else, switch, for, while, do, break, continue, with {} for block nesting. I also intend to support preprocessing directives like #define, #include, #if, #else, #endif.

Now, a more tricky problem is macros and metaprogramming. Most assemblers have macro features that allow metaprogramming, but the syntax is often complicated, confusing, and inconsistent. Such features are probably needed, but I would like a more logical syntax. I have looked at high-level languages to see if they have useful metaprogramming features. Metaprogramming in C++ is possible with templates, but this is very convoluted because the template feature was designed for a more narrow purpose. I would prefer something more similar to the support for self-modifying code in script languages: you make a string variable, issue this string to the assembler, and have it interpreted as assembly language.

We need a way to distinguish between code that is generated for run-time execution and meta-code that is executed in the assembler. My idea so far is to use a dot (.) or percent (%) or some other symbol to indicate assembly-time meta code. For example, a loop that adds the numbers from 1 to 10 at runtime will look like this:

The assembly-time variables will be dynamically typed, using 64-bit integers, double, or string, depending on the context. All common operators are supported. String expressions can be converted to assembly code. For example:

With this proposal, we have three kinds of code. For example, "#if" will branch at the preprocessing stage, ".if" will branch at the assembly metaprogramming stage, and "if" will generate executable code that branches at runtime. It may be confusing with three types of code, but at least the "#" and "." prefixes will make the distinction clear.
The preprocessing directives (with #) are executed before the assembler interprets any code, so it has no knowledge of anything defined in the code, while the metaprogramming features (with .) can check e.g. if a function is defined before making code that calls it, or it can make different code depending on the type or size of a defined data object. I would like to hear your comments before I implement all this. |
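A toy illustration of the metaprogramming stage alone (Python, not the proposed assembler syntax, and the emitted instruction text is invented): a meta-level loop runs inside the assembler and only its generated instruction strings reach the output, the way a ".for" construct would, whereas a runtime "for" would instead emit one branch-controlled loop.

```python
def expand_meta_loop(n):
    """Assembly-time unrolling: this loop executes in the assembler,
    and only the generated instruction strings appear in the output."""
    lines = []
    for i in range(1, n + 1):             # runs at "assembly time"
        lines.append(f"int64 r0 += {i}")  # emitted runtime instruction
    return lines

print(expand_meta_loop(3))
# ['int64 r0 += 1', 'int64 r0 += 2', 'int64 r0 += 3']
```

This is also the string-to-code idea from the post: the meta stage builds instruction text, and the assembler then interprets that text as ordinary assembly.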
Assembler with metaprogramming features
Author: Kai Rese | Date: 2017-08-11 09:13
This looks like the most convenient and easiest to use assembler I have ever seen. I'm really looking forward to playing around with it. Are there reasons why the preprocessing stage and the metaprogramming stage can't be merged into one stage? |
Assembler with metaprogramming features
Author: Agner | Date: 2017-08-11 10:00
Kai Rese wrote: Are there reasons why the preprocessing stage and the metaprogramming stage can't be merged into one stage?

The preprocessor has no knowledge of anything defined in the code. For example, it irritates me that you cannot do something like #if defined(myFunction) or #if sizeof(myStructure) > 8 in C or C++.
Assembler with metaprogramming features
Author: Kai Rese | Date: 2017-08-14 22:33
Agner wrote: I think it is convenient to have preprocessing directives like #include and #if resolved before the assembler begins to interpret anything else because programmers are used to this concept and it is completely transparent what happens. If we would use some kind of metaprogramming for the same purpose then it becomes less obvious at what stage in the process these directives are executed.

This is a valid point; I still have to think about that. My idea was to use metaprogramming as an extension to the preprocessor. For example, variables defined on the meta-level could be treated as preprocessor macros by the actual assembler. This policy also solves the ambiguity problem:

Agner wrote: I have encountered a problem with this strategy, though. Consider the following code, where % means define an assemble-time variable:

If the symbol always gets set in the meta-code and always gets inserted in the assembler code, this example would always produce: R1 = R2. We usually do not know the content of a register at assemble time. For that reason, we should be able to automatically make a symbol an alias when it is defined with a register. Function arguments should also be able to be defined as aliases automatically.
A possible solution to this dilemma is to define a symbol for alias names. For example:

This is a trade-off that has to be made: how many features do we need, and how much complexity is still easy enough to use? My vision was to use the metaprogramming features in a template-like fashion. For example, this is a template for an engine written in Python, called mako; it outputs HTML code:
My suggestion is aiming for fairly simple code, maybe at the price of a more complex interpretation. I really don't know if this would be a better way than the current way of using different prefixes. It could also cause more problems with debugging, as you mentioned. There also might be many other problems I'm currently not seeing, but it is an idea nonetheless.
Assembler with metaprogramming features
Author: Agner | Date: 2017-08-14 23:44
Kai Rese wrote: If the symbol always gets set in the meta-code and always gets inserted in the assembler code, this example would always produce:

The meta-code still needs access to its own variables, for example a loop counter. Perhaps we can use a percent sign to indicate that a meta-variable is set or modified.
|
Assembler with metaprogramming features
Author: | Date: 2017-08-15 21:04
With what we have so far, it should work like this:

%A = R1          // sets A as alias
%A = R1 + 5      // could be allowed as a complex symbol, could also get messy
int64 R2 = A     // allowed if A is defined, could be a move or a set instruction
A = R2           // allowed if A is an alias, otherwise not allowed
Number of register file ports in implementations
Author: Hubert Lamontagne | Date: 2017-08-22 12:46
I've taken another quick look today at the specs to figure out how feasible a Verilog implementation running on an FPGA would be, and I'm finding that the register file read port counts are on the higher end: 2 ports per AGU (address generating unit), plus 4 ports per ALU (3 operands + 1 mask) and 4 ports per vector unit (3 operands + 1 mask). This also affects the size of the input dependency tracking mechanism (and, I guess, the register renamer). So I'm wondering: how many ports can modern register files manage, how does it scale with chip size and frequency, and how many ports are realistic on a modern mid-sized FPGA (like a $30 FPGA or something)?
Number of register file ports in implementations
Author: Agner | Date: 2017-08-23 00:37
Hubert Lamontagne wrote: register file read port counts are on the higher end... 2 ports per AGU (address generating unit), plus 4 ports per ALU (3 operands + 1 mask) and 4 ports per vector unit (3 operands + 1 mask)... This also affects the size of the input dependency tracking mechanism (and, I guess, the register renamer).

The idea of ForwardCom is that it should be a high-end vector processor comparable to x86, ARM, etc. but with a better design. The AGU needs addressing modes with base and index as well as the special addressing mode for vector loops. The ALU needs 3 operands for fused multiply-and-add, which is required for a state-of-the-art processor, and I think that masking is also required for a high-performance vector processor. I have limited the number of output registers to one in order to simplify dependency tracking and register renaming, following our previous discussion. This means no flags register, no conventional add-with-carry instruction, and no single-instruction stack POP. Current x86 processors have at least the same number of register inputs, plus a segment register, and at least two register write ports.

My idea is to combine the best from RISC and CISC. The weakness of CISC is that decoding is complicated, and it is difficult to decode multiple instructions in one clock cycle. Current x86 processors cache decoded instructions to deal with this, which is a waste of silicon space. ForwardCom deals with this by making decoding simple. The weakness of RISC is that you need more instructions to do the same work. For example, RISC-V needs a long sequence of instructions just to calculate the address of a memory operand. If you can do the same with a single instruction, you get a much higher overall throughput.

Why do you think register read ports are so expensive?
Number of register file ports in implementations
Author: Hubert Lamontagne | Date: 2017-08-27 10:12
Agner wrote:The idea of ForwardCom is that it should be a high-end vector processor comparable to x86, ARM, etc. but with a better design. The AGU needs addressing modes with base and index as well as the special addressing mode for vector loops. The ALU needs 3 operands for fused multiply-and-add which is required for a state-of-the-art processor, and I think that masking is also required for a high-performance vector processor.I think this is one of those kind of things where it really depends on which tech you're aiming... Intel and AMD have huge register files with tons of ports on their modern cores. That probably takes up tons of silicon and suck out tons of power (and is probably hard to get running at very high clock rates). But that kind of chip is your target, so it's appropriate. Come to think of it, you're reminding me of the whole segment thing, and that's making me quite curious as to how x86 cores track that - it's the kind of thing you'd possibly want to keep in a separate register file and give it its own register renamer to reduce the level of contention with the rest of the registers. On FPGAs, which I think would make sense for a test implementation, register files with lots of ports are an issue... especially with multiple write ports with no limitations on which registers are concurrently written to. As for RISC-V, they're obviously targeting smaller implementations (kindof like ARM), which are often in-order with 1 or 2 instructions per cycle. At that performance point (and in small out-of-order CPUs), this MIPS-inspired design is pretty much the optimal design (along with ARM-style), as far as I can tell. Though you do have a point, that they might be overdoing simplicity by not having R+R*n addressing modes - although the performance hit of that probably varies a lot per application. 
I think a very large proportion of memory reads in non-vector code are stack pointer or object pointer + offset, which probably reduces the impact of lacking more complex addressing modes. One thing ARM has done is to keep the number of input dependencies per uop low at 2 for the integer part of the core (these ops HAVE to run at low latency), and let vector ops have 4 inputs (you can see the cost of this if you look at result latency for NEON ops - often 3+ cycles!). |
Reply To This Message |
Number of register file ports in implementations |
---|
Author: Agner | Date: 2017-08-28 01:35 |
Hubert Lamontagne wrote:

On FPGAs, which I think would make sense for a test implementation, register files with lots of ports are an issue... especially with multiple write ports with no limitations on which registers are concurrently written to.
|
Reply To This Message |
Number of register file ports in implementations |
---|
Author: Bigos | Date: 2017-08-28 15:36 |
Agner wrote:

Hubert Lamontagne wrote:
Also, I don't see what would be the benefit of such a partitioning? Of course, some level of partitioning is needed (for example for power-gating unused data paths or separating some per-line units like adders), but that is physical (you could split it for each bit if you wanted); I don't think it would help the register files much - you still need as many read and write ports.
Also, about the register file, the most problematic are the write ports. You can always double the number of read ports by duplicating the register file and having each write take place in both copies; the scaling is less than linear due to the writes being more complicated (they have to drive more inputs), but is mostly fine. Minimizing the number of write ports is the most important thing.

You can also partition your register space into multiple classes, each of these having a different register file (Intel does it with the kN mask registers). This effectively resets your register file port count, unless you have to duplicate all of the ports in order to support all of the possible register data transfers at the same time. You can, for example, partition the scalar and vector registers into different files (like AMD does) in order to increase the maximum possible throughput if you use both scalar and vector instructions at the same time (Zen can handle up to 10 instructions partly thanks to that, though only for short bursts due to bottlenecks elsewhere in the pipeline; Intel can only do 7, not counting the separate store port).

BTW, a similar partitioning scheme can be used for caches (the most obvious being I$ vs D$). For example, the Mill architecture has 2 different I$ partitions - one for instruction chunks going forward and another for the ones going backward. Each jump target is really in the middle of the code, and each instruction is halved, with half going forward and half going backward. Since each instruction half will only ever go to a single cache, you effectively double the cache capacity without compromising the latency and complexity (just 2x area and power for having twice as much memory). https://millcomputing.com/docs/encoding/ |
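The read-port duplication trick described above can be illustrated with a minimal software model. This is a sketch (in Python, not from the thread's authors, and with made-up port counts): every logical write must be driven into all register-file copies, while each copy serves only a subset of the read ports.

```python
# Minimal model of read-port scaling by duplication: N identical copies of
# the register file multiply read bandwidth, but every write must be driven
# into all copies, so write ports remain the scarce resource.

class DuplicatedRegFile:
    def __init__(self, num_regs, num_copies):
        self.copies = [[0] * num_regs for _ in range(num_copies)]

    def write(self, reg, value):
        # One logical write port drives every copy (the costly part).
        for copy in self.copies:
            copy[reg] = value

    def read(self, port, reg):
        # Each physical copy serves a disjoint subset of the read ports.
        return self.copies[port % len(self.copies)][reg]

rf = DuplicatedRegFile(num_regs=32, num_copies=2)
rf.write(5, 123)
# All read ports observe the same architectural state:
assert [rf.read(p, 5) for p in range(4)] == [123, 123, 123, 123]
```

The Verilog-tool behavior Hubert mentions later in the thread (the tool creating as many register file copies as needed) is essentially this transformation done automatically.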
Reply To This Message |
Number of register file ports in implementations |
---|
Author: | Date: 2017-08-28 17:05 |
Agner wrote:

Hubert Lamontagne wrote:

Hmm, interesting. I think I've read that some Intel P2 family cores work something like this somewhere. Though I'm a bit worried that this might make the Reservation Station rather big, since each input operand of each instruction in the Reservation Station has to watch all the write ports all the time and multiplex them in... So your reservation station entry needs a register per operand, plus a multiplexer for each operand so that the appropriate result can be saved. This is kind of moving the hugely-multiple-write-ported register file into the reservation station. I think in that case your permanent register file also still needs about as many ports as in a design where the same register file is used both for permanent and temporary values.

It might make sense to go for a design where registers are renamed to different physical registers depending on which unit produces them. For instance, ALUs and memory load units could have different physical register pools internally (even though they look the same to programmers) so that they don't have to share write ports, and the register renamer can relatively easily keep track of what's in which pool. Generally for FPGAs, register files with many load ports and a single write port tend to be the better plan - the Verilog tool simply creates as many register file copies as it needs.

Vectors are implemented as follows. The pipeline is split into 64-bit lanes. Each lane has its own 64-bit register file, reservation station, pipeline, and 64-bit ALU. We can add as many 64-bit lanes as we need to support a certain vector length. There is one scheduler which controls all the lanes. This can run smoothly and fast as long as there is no exchange of data between the lanes. Instructions that move data between the lanes, such as broadcast and permute, are implemented in a separate unit with a single bidirectional data path to all the lanes. Such instructions will take one or two clock cycles extra, as they do in x86 processors.

Quite sensible and I'd bet dollars to pennies that most SIMD units are built this way ;3

The in-order front end has its own ALUs which can handle simple integer instructions. Simple instructions on general purpose registers are executed here if their operands are available (as I have proposed in a previous post). Loop counters and simple branches will be executed in the front end so that the instructions of the loop body are sent to the out-of-order back end with the loop counter replaced by its value. This will reduce the branch misprediction penalty. Registers used for memory pointers and indexes will also mostly be resolved in the front end so that the memory operands can be fetched early.

Hmm, I wonder if it's possible to track which instructions/operands/values are "front end" and tag them in the instruction cache, so that they can be separately issued? Using some kind of heuristic like "result used in address computation" or "result not coming out of a memory load / sufficiently stale by the time it gets used". Like, a front/back-endness predictor ;3 |
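The lane-partitioned vector design described above can be summarized in a toy cycle model. This is a sketch under assumed costs (1 cycle for lane-local ops, 2 extra cycles for cross-lane ops, taking the "one or two clock cycles extra" remark at the upper end), not a real ForwardCom specification:

```python
# Toy cost model of a lane-partitioned vector unit: lane-local operations
# cost the same regardless of vector length, while operations that move
# data between lanes pay extra for the shared inter-lane data path.

IN_LANE = {"add", "mul", "fma"}        # each 64-bit lane works on its own
CROSS_LANE = {"broadcast", "permute"}  # needs the shared inter-lane unit

def vector_op_cycles(op, cross_lane_penalty=2):
    if op in IN_LANE:
        return 1                       # independent of the number of lanes
    if op in CROSS_LANE:
        return 1 + cross_lane_penalty
    raise ValueError(f"unknown op: {op}")

# The cost of a lane-local loop body does not change when the hardware
# grows from 2 lanes (128-bit vectors) to 8 lanes (512-bit vectors):
assert sum(vector_op_cycles(op) for op in ["add", "fma", "mul"]) == 3
assert vector_op_cycles("permute") == 3
```

The model captures why this design scales: adding lanes adds throughput without changing the latency of lane-local work, and only cross-lane shuffles pay for the shared data path.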
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2017-09-20 14:03 |
Re variable instruction lengths: Consider some forced alignment, e.g. every 16-byte (or every 32-byte) boundary MUST begin a new instruction. This has minimal impact on the effective code density (if your compiler is not stupid), while allowing almost random access decoding of instructions. This is good for the processor (short dependency chains with respect to instruction decoding/parsing, as opposed to infinite), and even more important for reverse engineering software, debuggers, etc.

Even better: Declare that only 16-byte aligned addresses are valid jump targets. Effect: You can decode a huge binary and be sure that all valid jump targets have been found. It is easy to check whether some instruction is contained in the code or not: use grep. Your compiler should be able to ensure alignment without issuing too many NOPs.

Enforce "write XOR execute". This has a lot of advantages. Firstly, security - but this should rather be the application programmer's job. Secondly, simplified cache coherency: instructions and data use separate caches, and attempting to ensure that a write to an instruction already in the pipeline rolls back correctly... yeah. On ARM you need to explicitly flush the instruction cache for this to work.

Third, and the biggest advantage: Consider QEMU. Emulating ARM is a breeze: you KNOW where code is located, and every jump/instruction cache flush triggers software decoding, translation to e.g. x86 and optimization (LLVM), caching of the result, etc., and then continuation. Do not try this with x86. You don't know where instructions start, you don't know whether the code is self-modifying, everything sucks.

Btw: self-modifying code, jit-compilers, etc. should be minimally affected by such a restriction: between each self-modification / jit-round, you pay a round-trip to main memory and possibly a syscall (if you don't have permissions to map a page as executable). 
Indeed, I imagine that your ISA could first find adoption as an intermediate representation (like LLVM IR) - if you ensure that it is very fast to jit/compile from your language to fast x86 and ARM, and pretty fast to verify properties of code.

Bonus request: Performance-hint/NOP: Allow the instruction to hint at likely-ness; often the compiler/programmer can predict the branch pretty well.

Bonus request: The stack area, or some part of it, is always in an incoherent cache state: the current core/context is the only one that can write/read there. Or maybe provide a separate magic memory location for this, if the stack would clash too much with calling conventions. Desired effect: The compiler/programmer is limited by the number of addressable registers. Often, you will need to temporarily store stuff you need later. In reality, more registers exist. Allowing the processor to avoid hitting the L1 cache and use a renamed/anonymous register should give huge performance boosts in future processors with more anonymous registers running legacy code. At the same time, this makes dependency analysis (in software and hardware) easier.

Bonus request: Hardware support for constant-time operations. It turns out that this is quite important in a lot of crypto protocols (timing side-channels). I am not entirely sure how to best help here, but if it was possible to get an accurate cycle counter, and a NOP-until-cycle-xyz instruction, this would probably be supremely useful. Both instructions (count-cycle and wait-until-cycle) are allowed to take very long: these are rare instructions, and it is totally fine to take a 10000-cycle hit! On the other hand, these would also simplify a lot of debugging and performance testing.

Re floating point exceptions: I think control registers, trapping and fault-handling are the right way to go for "extremely unlikely" branches. 
This way, you basically have a conditional branch on every memory access or floating point operation (page fault?), which has a very high cost for mispredictions, and is very cheap on correct predictions (effectively zero overhead). |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2017-09-20 15:23 |
Thanks for your thoughtful suggestions. This project may become a sandbox for many experiments and research on such ideas for more efficient computer systems.
yeengief wrote:

Re variable instruction lengths: Consider some forced alignment, e.g. every 16-byte (or every 32-byte) boundary MUST begin a new instruction. This has minimal impact on the effective code density (if your compiler is not stupid), while allowing almost random access decoding of instructions. This is good for the processor (short dependency chains with respect to instruction decoding/parsing, as opposed to infinite), and even more important for reverse engineering software, debuggers,

Currently, the ForwardCom ISA allows three different instruction lengths: 1, 2, and 3 words of 32 bits each. Decoding is easy because the instruction length is determined by only two bits. Forced alignment is applied only to tiny instructions of half-word size, which must be packed two-by-two at word boundaries, and you can't have a jump target in between. The problems you are trying to solve certainly exist in the x86 world, where it is very complicated to determine the length of an instruction, and some compilers put data in the code segment (e.g. jump tables). But I don't see any big problems in ForwardCom. I found that variable instruction length is important because it is difficult for the compiler to predict whether the distance for a relative jump will fit into a certain instruction size. It is easier to leave it to the assembler to find the optimal size of each instruction.

Regarding reverse engineering, I have already made a disassembler. It works so well that the code can be re-assembled. I am soon finished making a high-level assembler. The syntax resembles C, but the programmer has to select the individual instructions and registers. I will put it on Github later this autumn.

yeengief wrote:

Enforce "write XOR execute".

Of course. Executable code is in execute-only sections by default. Execute-and-read access is possible, but write access should not be allowed. Jump tables and other code pointers are in read-only sections. Self-modifying code is not allowed. 
yeengief wrote:

Bonus request: Performance-hint/NOP: Allow the instruction to hint at likely-ness; often the compiler/programmer can predict the branch pretty well.

Branch prediction matters only for branches that are executed millions of times. Static branch prediction affects only the first few executions of the millions, while dynamic branch prediction affects them all. If I understand you right, the likely-ness hint is a form of static branch prediction, so I don't see the point. The compiler may place the often-executed branches or functions in a hot section and the rarely executed branches, such as error handling, in a cold section that rarely pollutes the cache. I see this as a pure compiler issue which does not affect the ISA design.

yeengief wrote:

The stack area, or some part of it, is always in an incoherent cache state: the current core/context is the only one that can write/read there.

Each thread has private stack memory by default which is not accessible to other threads, except when legacy software requires otherwise. I am not sure I understand you, but maybe this is what you mean?

yeengief wrote:

Hardware support for constant-time operations. It turns out that this is quite important in a lot of crypto protocols.

Funny that you should mention this. I got the same idea a week ago. The next version will have optional support for a mask bit that dictates that execution time should be independent of the values of operands. This does not necessarily imply that execution time is predictable, only that it is independent of the values of sensitive data. Branches can be replaced by predicated instructions, and small lookup tables can be replaced by vector extract or vector permutation instructions.

yeengief wrote:

Re floating point exceptions: I think control registers, trapping and fault-handling are the right way to go for "extremely unlikely" branches. 
This way, you basically have a conditional branch on every memory access or floating point operation (page fault?), which has a very high cost for mispredictions, and is very cheap on correct predictions (effectively zero overhead).

Fault trapping can be turned on or off for both floating point errors and integer overflow. The only problem is that the behavior depends on the vector length if an error occurs in a single vector element. NaN propagation is offered as an alternative if you want to trace errors to a single vector element and you want identical behavior on processors with different vector lengths. |
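The predication idea Agner mentions (replacing a branch on secret data with a masked select) is a standard constant-time technique and can be sketched in a few lines. This is generic illustration, not ForwardCom syntax, and Python's big integers are not actually constant-time; the point is the branch-free structure:

```python
# Branch-free select: the result is computed from both inputs through a
# mask, so there is no data-dependent branch to leak timing information.

WIDTH = 64
MASK = (1 << WIDTH) - 1

def select_branchy(secret_bit, a, b):
    return a if secret_bit else b          # timing can leak secret_bit

def select_constant_time(secret_bit, a, b):
    mask = (-secret_bit) & MASK            # all-ones if 1, all-zeros if 0
    return (a & mask) | (b & ~mask & MASK)

# Both selects agree on every input, but only one has a secret-dependent branch:
for bit in (0, 1):
    assert select_constant_time(bit, 123, 456) == select_branchy(bit, 123, 456)
```

In hardware, the mask bit Agner describes would extend the same idea to whole instructions: the instruction executes in both "branches" of the predicate, with the mask deciding which result is kept.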
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2017-09-20 16:22 |
>Currently, the ForwardCom ISA allows three different instruction lengths: 1, 2, and 3 words of 32 bits each. Decoding is easy because the instruction length is determined by only two bits.

The first problem with variable instruction lengths is that code has, in the worst case, three different interpretations, depending on the entry point. I agree that this price is likely worth paying for the increased code density. My problem is the long-range dependency of the correct decoding on the entry point. Without restrictions, this dependency is effectively infinite-length. Q: What do x86 code and DNA have in common? A: You need a hidden Markov model to guess at the decoding if you don't know the entry points. This is bad; I don't want to care about BLAS when writing a disassembler.

To take a larger number: Suppose you say that every aligned 16-word block (nice, cache line!) MUST begin a new instruction. Now the length of the dependency of the "correct reading frame" for your code has shrunk down to one cache line. What does this cost us? The very worst case is that we need to pad two NOPs when trying to emit a 3-word instruction near the end of a line. In other words, the code becomes 2/16 = 12.5% less dense, if the compiler can do no re-orderings at all. Code where every instruction is dependent on the previous one will be dog-slow anyway, so meh. My maybe too strict initial proposal of cutting at 4 words would give a maximal overhead of 50%, i.e. double the code length - for code where no reordering is possible and the lengths are maximally bad. Since instructions of length one are most frequent, and most of the time you have a little bit of freedom to reorder, this should not occur too often. Restricting jump targets to certain alignments should likewise have only a limited effect on code length, except for the case where we have tiny tiny loops. 
Unfortunately, only allowing jumps to beginnings of cache lines is probably too restrictive; otherwise we could get unambiguous decoding. This would simplify a lot of stuff, e.g. because the processor designer could decide to dynamically translate code into some internal uops, unambiguously, cached, and independent of actual execution flow. In other words: we would need to decode instructions only on mapping pages as executable. Also, it would simplify verification of code properties. For example, NaCl (Google Native Client) uses such alignment restrictions to make the decoding of NaCl-compliant code statically unambiguous.

>Execute-and-read access is possible, but write access should not be allowed.

Now that you mention it, why not go full Harvard? Would it be really bad to disallow read access to executable pages? Again, the idea is dynamic translation: the original code you want to read might have been swapped out to tape years ago, and if you really need access, just map it into a different page. Sidenote: Apple's iOS is almost Harvard. Mapping a page as executable triggers code signature checks and loads of crypto, and removes writability. I think they still allow read access, though. |
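The density numbers in the posts above all follow from one small formula, which a short sketch can check (the 12.5%, 50%, and later 25% figures are worst cases, assuming the compiler can do no reordering at all):

```python
# Worst-case NOP padding when every aligned block of `block_words` words
# must begin a new instruction and the longest instruction is
# `max_instr_words`: a block can waste at most max_instr_words - 1
# trailing words (a long instruction that would straddle the boundary).

def worst_case_overhead(block_words, max_instr_words=3):
    return (max_instr_words - 1) / block_words

assert worst_case_overhead(16) == 0.125  # the 2/16 = 12.5% case above
assert worst_case_overhead(4) == 0.5     # the stricter 4-word proposal
assert worst_case_overhead(8) == 0.25    # the later 8-word proposal
```

The default of 3 words matches ForwardCom's maximum instruction length; the formula would change for an ISA with longer instructions.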
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2017-09-20 23:52 |
yeengief wrote:

the processor designer could decide to dynamically translate code into some internal uops, unambiguously and cached and independent of actual execution flow.

Many x86 processors now have a micro-op cache because the decoding of instructions is a bottleneck. The ForwardCom ISA is designed so that decoding is very simple, never requiring more than one pipeline stage or one clock cycle. My idea is that a micro-op cache should never be needed. It should be easy to decode several consecutive instructions in parallel.

Regarding Harvard architecture, I imagine that there will be a code cache and a data cache, and perhaps a third cache for read-only data sections. But the hardware designer is of course free to choose some other configuration. The call stack and data stack are separate. The call stack will be so small that it can fit into its own little cache throughout the execution of a program, except for deeply recursive functions. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2017-09-21 07:54 |
>Many x86 processors now have a micro-op cache because the decoding of instructions is a bottleneck. The ForwardCom ISA is designed so that decoding is very simple, never requiring more than one pipeline stage or one clock cycle.

Yes, but decoding is still sequential and not parallel. My proposal makes decoding ridiculously parallel. New number proposal: 8 words, half a cache line. The beginning of every 8-word block must begin a new instruction. Only the beginnings of 8-word blocks are valid jump targets. Hence, I can never jump into the middle of an instruction.

(Alternative proposal: Decoding of instruction lengths is always relative to the most recent 8-word boundary. Entry into the middle of an instruction is a fault. Worst case price: The decoder must look back 7 words on a jump and decode the instruction lengths in order to determine whether the jump was faulty. This does not require a lot of extra instruction fetching because these 7 previous words are in the same cache line anyway.)

Worst case overhead for the instruction alignment restriction: 25% is NOP. Worst case for the jump target restriction: harder to compute, but compilers should be able to manage without too many NOPs. Gain: Decoding is ridiculously parallel, because I can decode every 8-word block independently. If decoding becomes the bottleneck, I just plug in more decoding units (i.e. we might still be limited by decoding latency, but never by decoding throughput). The longest possible dependency chain in decoding is 7. Your proposal has infinite dependency chains in decoding (you must know the reading frame).
Another bag of problems we can avoid: Suppose some program uses code in two valid decodings, dependent on entrypoint. This means that your branch predictor, uop-cache, etc must work on the tuple (memory location, reading frame), where reading frame is one of 0,1,2. With my proposal you cut this down to just memory location, which simplifies the logic a lot-- because the logic for caching uops or other processor generated metadata is the same logic as for caching data and just maps address -> stuff. The only real-life code that does such stuff (e.g. uses a 2-word instruction as a different instruction by jumping into the middle) is shellcode. While shellcode is fun and all, this is maybe not the most important customer group. You target "real" compilers, and not return oriented programming compilers. |
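The alternative proposal above (decode instruction lengths relative to the most recent 8-word boundary, and fault on entry into the middle of an instruction) can be sketched directly. Assumptions in this sketch: instructions are 1-3 words long, and the length is readable at each instruction's first word:

```python
# Jump-target validation under the "decode from the last 8-word boundary"
# scheme: a target is valid only if walking instruction lengths forward
# from its block boundary lands exactly on it. At most 7 words are
# re-examined, all within the same half cache line.

BLOCK = 8  # words per alignment block

def is_valid_jump_target(lengths, target):
    """lengths[i] = length in words (1-3) of the instruction starting at
    word i; entries at non-start positions are never consulted."""
    pos = (target // BLOCK) * BLOCK   # most recent block boundary
    while pos < target:
        pos += lengths[pos]           # walk forward, at most 7 words
    return pos == target              # overshooting means mid-instruction

# One block: a 3-word instruction at 0, then 1-word, 2-word, 1-word, 1-word:
lengths = [3, 0, 0, 1, 2, 0, 1, 1]
assert is_valid_jump_target(lengths, 3)      # a real instruction start
assert not is_valid_jump_target(lengths, 1)  # middle of the 3-word instr
```

Each block can be checked independently of all others, which is the parallelism the proposal is after.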
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2017-09-21 12:54 |
yeengief wrote:

Yes, but decoding is still sequential and not parallel.

My plan is indeed that decoding in parallel should be possible. Assume that you are loading 32 bytes (= 8 words) of code in one clock cycle. Each 4-byte word has two bits that may determine instruction length. This gives 16 bits in all. You need a combinational logic circuit with 16 inputs to determine where all the instruction boundaries are. This can easily be done in one clock cycle. There is no reason to decode too far ahead because mispredicted branches may spoil the advantage. I think it is better to spend your resources on decoding multiple possible branches simultaneously when facing a branch with poor prediction. And, as I said, a micro-op cache is not needed. |
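The boundary-finding circuit Agner describes is easy to model in software. This sketch assumes a length code of 0, 1, or 2 meaning 1, 2, or 3 words; the actual ForwardCom bit assignment may differ, and the real circuit computes all boundaries combinationally rather than in a loop:

```python
# Software model of the instruction-boundary logic: given the 2 length
# bits of each of the 8 words in a fetch block, mark every word that
# starts an instruction.

def instruction_starts(length_codes):
    """length_codes: 8 two-bit values, one per 32-bit word in the fetch."""
    starts, pos = [], 0
    while pos < len(length_codes):
        starts.append(pos)
        pos += length_codes[pos] + 1   # code 0/1/2 encodes 1/2/3 words
    return starts

# A 3-word instruction, then 1-word, 2-word, and two 1-word instructions:
assert instruction_starts([2, 0, 0, 0, 1, 0, 0, 0]) == [0, 3, 4, 6, 7]
```

Only 16 input bits determine the whole answer, which is why a single-cycle combinational implementation is plausible.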
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: | Date: 2017-09-21 14:47 |
>My plan is indeed that decoding in parallel should be possible. Assume that you are loading 32 bytes (= 8 words) of code in one clock cycle. Each 4-byte word has two bits that may determine instruction length. This gives 16 bits in all. You need a combinational logic circuit with 16 inputs to determine where all the instruction boundaries are. This can easily be done in one clock cycle. There is no reason to decode too far ahead because mispredicted branches may spoil the advantage. I think it is better to spend your resources on decoding multiple possible branches simultaneously when facing a branch with poor prediction. And, as I said, a micro-op cache is not needed.

If you really, really don't want this... But I am telling you that you will regret this choice the next time you need to software-decode gigabytes of data (e.g. a malware sample database, or memory snapshots for forensics), or when you want to write a QEMU module for emulating your processor, or when you want to implement a sandbox for untrusted code (see NaCl, WebAssembly, etc.) where you want a whitelist of acceptable instructions (in order to protect from unknown kernel/driver bugs). And every hacker will tell you that the ambiguity of instruction decodings is absolutely bonkers (hey, I overflowed over a function pointer but have no w&x page... let's jump into the middle of some benign instruction to give an entirely different meaning to the rest of the code).

In fact: How would you software-decode a big binary blob and check whether it contains a forbidden instruction? Sequentially, letting all but one core idle? Speculatively and in parallel, throwing away 2/3 of the work? Ah, this does not really work: you cannot throw away 2/3 of the misaligned work, because you want to reach all possible decodings that the processor can see. Hence, your data has grown by a factor of three during "complete" decoding, and you will need to use complex and faulty heuristics to remove unneeded stuff (oh, if I jump here then I will hit an invalid instruction down the road, by symbolic execution; hence this probably was not the right reading frame... Hah! It was an undocumented instruction and you just missed the crucial part of the code!). This is not hypothetical. This hell is reality, today; behold IDA. Please don't make more of it.

>And, as I said, a micro-op cache is not needed.

It is needed if I want to run your binary in QEMU on a different architecture. The very first users of a new architecture are people who want to code/debug on an emulated version, long before the first chip is produced. And who knows whether a hardware micro-op cache will be needed in the future? By allowing jumps into the middle of instructions you will force everyone in the future to support such legacy behaviour, which is effectively good for shellcode only. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2017-09-23 00:43 |
Enforcing instruction alignment at cache boundaries doesn't help much, because malicious code can jump over the boundaries. You would have to enforce alignment of all jump targets as well, which means a lot of wasteful NOPs in the code.

Another possible solution is to use the same principle as in UTF-8 coding: single-byte UTF-8 codes and continuation bytes can be distinguished by the first bit. A similar principle could be implemented in ForwardCom. The first word of a multi-word instruction has the most significant bit = 1. The last word of a multi-word instruction has the most significant bit = 0. A single-word instruction has the most significant bit = 0. This will make it impossible to execute stealth code by misaligning an entry. To implement this feature we need an extra bit in the first word of each multi-word instruction to replace the bit we have used in the last word. If the last word contains a 32-bit payload such as a 32-bit address or a floating point number, then all binary tools will need to have the extra complication of inserting the missing bit. This is a clumsy solution, but possible. The first word of multi-word instructions has no unused bits, so we will have to sacrifice some other feature if we want to make space for this extra bit.

We have to make a cost-benefit analysis to see if this feature is worth the costs. The costs of making a bit to distinguish start words from end words are: fewer bits for instruction features, clumsy and complicated binary tools, and slower emulation. The advantage is easier analysis of potentially malicious code. I wonder if the kind of analysis you are talking about actually is an efficient weapon against malicious code. It is not easy to jump to an unauthorized address on ForwardCom without detection. All jump tables and function pointers are in read-only memory, and the call stack is separate from the data stack and isolated from everything else. The only way malicious code could jump to an unrecognized address is by jumping to a calculated address and making this calculation difficult to trace. A scanner for malicious code should look for any jump to a calculated address and do a more thorough analysis only if such a jump occurs. This extra analysis wouldn't be so time-consuming after all, because disassembling is fast.

I would be more concerned about malicious code calling a system function with a calculated ID. Running in a sandbox, the code would call a benign system function, while under certain conditions or on a certain date, the malicious code would call a different system function with different parameters. An intelligent analysis tool could detect that the code uses a calculated system function ID and calculated parameters, but it may be impossible for even the best analysis tool to detect what the code can do if the calculation of the system function ID uses cryptographic techniques. We cannot prevent this kind of attack by code alignment techniques and analysis of binary code. I think the best prevention against threats like these is to put restrictions on the access rights of each executable file. I have proposed the following security principle for ForwardCom: an executable file is not allowed to run unless it has been installed by a standardized installation procedure. This procedure includes the assignment of access rights to the executable file, such as whether the file is allowed to access various resources. |
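The UTF-8-like framing Agner sketches can be checked with a few lines. One assumption this sketch makes that the post leaves open: the middle word of a 3-word instruction also carries msb = 1, so that msb = 0 marks exactly the last word of every instruction:

```python
# Self-synchronizing instruction framing: any word that follows an
# msb = 0 word begins an instruction, so instruction starts can be
# recovered from the bit stream alone, with no reading-frame ambiguity.

def starts_from_msb(msb_bits):
    """msb_bits[i] is the top bit of word i; word 0 is a known start."""
    starts = [0]
    for i, bit in enumerate(msb_bits[:-1]):
        if bit == 0:              # word i ends an instruction,
            starts.append(i + 1)  # so word i + 1 starts the next one
    return starts

# A 3-word instruction (1,1,0), a 1-word (0), and a 2-word (1,0):
assert starts_from_msb([1, 1, 0, 0, 1, 0]) == [0, 3, 4]
```

This is the same property that makes UTF-8 decodable from any byte offset: a misaligned entry point can be detected locally, without knowing the history of the stream.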
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: - | Date: 2017-09-22 22:05 |
> Btw: self-modifying code, jit-compilers, etc should be minimally affected by such a restriction: Between each self-modification / jit-round, you pay a round-trip to main memory and possibly a syscall (if you don't have permissions to map a page as executable).

I'm not sure about strictly enforcing W^X. I get that it really messes with processor optimizations and all, but applications such as emulators which make heavy use of dynamic recompilation would suffer if they are forced to syscall every time they modify code. https://news.ycombinator.com/item?id=11789169

I haven't looked at this architecture yet, though I do wonder how cache flushing instructions could mess with security issues like cacheline-based attacks. |
Reply To This Message |
Proposal for instruction set - now on Github |
---|
Author: Agner | Date: 2017-09-23 01:19 |
- wrote:

self-modifying code, jit-compilers, etc should be minimally affected by such a restriction: Between each self-modification / jit-round, you pay a round-trip to main memory and possibly a syscall (if you don't have permissions to map a page as executable).

I am not too fond of the trend towards JIT compilation and byte code. If you are JIT compiling each part of a program separately, you get erratic and unpredictable response times, which users hate. I would prefer to compile everything once. ForwardCom is designed to be forward compatible in the sense that a compiled program can run optimally without recompilation on future microprocessors with longer vector registers. This means less need for JIT compiling. Another planned feature is re-linking of executable files. Instead of calling a DLL or shared object, you use static linking to include the necessary library functions in the executable, with the possibility to re-link later if a library function needs replacement. This makes the code run faster than if dynamic linking is used, and it makes it possible to adapt the code to different environments by replacing e.g. a user interface library.

- wrote:

I'm not sure about strictly enforcing W^X. I get that it really messes with processor optimizations and all, but applications such as emulators which make heavy use of dynamic recompilation would suffer if they are forced to syscall every time they modify code.

JIT compilation is also a security problem. If we allow self-modifying code, or code that generates code and executes it immediately, then we have all kinds of security problems. I would like to re-think the whole idea of JIT compilation. Emulation is a problem, I agree, if we don't allow dynamic compilation. An emulator running under ForwardCom and emulating something else would need special permission to do dynamic compilation and run the code it generates. The user will be warned when installing the emulator that it needs dangerous special permissions. |
Proposal for instruction set - now on Github |
Author: Hubert Lamontagne | Date: 2017-09-25 16:43 |
yeengief wrote: […]

That's an interesting proposal, though I think you'd probably end up needing two different stacks. The C++ stack is a mosaic of data that the compiler knows the user "can't" access (like local values of previous functions that are never accessed through a pointer, register saves, return addresses) and data that the compiler knows is "pointer accessible" (values whose address the code takes).

"Pointer accessible" data needs to be readable across threads, too (it's totally legal to have another thread operate on data on your stack, as long as you can guarantee that your function won't exit before the other thread is done). "Pointer accessible" data can't be renamed to a register; it basically has to go through the same accesses as other memory reads/writes and must preserve ordering and page-faulting behavior (which basically means it has to be in the L1 data cache).

But the "non-pointer accessible" stack could probably have its own separate memory space, which would make it possible to give it its own separate cache, read/write to it in parallel, and even do register renaming on it as you propose. |
Proposal for instruction set - now on Github |
Author: Agner | Date: 2017-09-26 00:05 |
Hubert Lamontagne wrote: […]

I am not sure how this differs from my proposal: each thread has two stacks. The call stack is private and separate from everything else (and usually so small that it can be cached entirely on-chip). The data stack is private to the thread that owns it.

This requires some discipline from the programmer. Only static memory or memory allocated by the main thread is accessible to all threads. Or you may have a special malloc_shared function for allocating memory that is accessible to all threads. Semaphores and mutexes are placed in shared memory.

The private data stack is used for function-local variables and register spilling. Since it is private, it does not need coherence across threads. This makes caching of local data simple and efficient. It is also good for safety, of course. Legacy software needs to turn off the private-stack feature if it shares local memory between threads. A flag in the executable file header could control this feature. |
Proposal for instruction set - now on Github |
Author: Hubert Lamontagne | Date: 2017-09-26 01:37 |
Agner wrote: […]

The difference is that I'm seeing it from the C++ compiler point of view, and from how the current ecosystem of malloc + mutexes + threads works in complex applications. As it is, program values whose address you don't take are indeed truly local: they show up as renamable values in the SSA pass rather than translating into GEP + Load + Store instructions, and they can safely be moved into non-addressable memory such as a special stack (or, obviously, a register). This class of values includes register spills, argument passing, and function return addresses; none of these are "really" visible to the programmer (except for the value itself), so they can also be moved to a private stack, even in existing C++ code. This doesn't even require special discipline. You can do really crazy optimizations on this "value-only" data, like reordering the reads/writes, assuming perfect alignment and no partial value writes, making page faults and reads/writes from other threads impossible, and allowing register renaming, because you're protected from all the crazy cases: the user doesn't have the address and can't pass it around halfway across the program.

As soon as you take a pointer or reference, it automatically becomes visible to anything you can pass that pointer to, which turns out to be the whole rest of the process, including other threads. This forces you to give it a real memory address and do real Loads/Stores that run in program order and are visible across all threads. Only after that can you run an optimizer pass that tries to remove the Loads/Stores if it can prove that doing so doesn't change program behavior, which, it turns out, is really hard. Even just reordering Loads/Stores is hard, because it's hard to prove that pointers passed to a function will never overlap, and to rule out even really dumb cases like a function's input and output buffers overlapping out of alignment.
The difference is that my proposal is based on pointer sharability: |
New assembler, new version, new forum |
Author: Agner | Date: 2017-11-03 11:24 |
The project is making progress.
In the new assembler, instructions are written like C-style expressions, for example:

    double v1 = v2 + v3

It also supports C-style branches and loops, for example:

    for (int64 r1 = 1; r1 <= 100; r1++) {
        ...
    }

The manual is updated with many small changes: github.com/ForwardCom/manual. |