Assembly Language Lab
The main purpose of this page is for people who already know some assembly and C to see why it is often very beneficial to use a direct assembly implementation over a pure C implementation. There are, in my opinion, too many programmers out there who just don't know what a difference hand coded assembly can make. I hope to help remedy this situation. If you have other examples, or input (or challenges) regarding this page don't hesitate to contact me.
GCD
On comp.lang.asm.x86, Jon Kirwan asked for compiler output for the "greatest common divisor" function from the following C implementation:
unsigned int gcd (unsigned int a, unsigned int b) {
    if (a == 0 && b == 0)
        b = 1;
    else if (b == 0)
        b = a;
    else if (a != 0)
        while (a != b)
            if (a < b)
                b -= a;
            else
                a -= b;
    return b;
}
Here's my favorite C compiler with /os/s options (optimize for size, use no stack checking; other optimization options did not help much) versus a human (i.e., me) implementation:
; WATCOM C/C++ v10.0a output
gcd:
    mov   ebx,eax
    mov   eax,edx
    test  ebx,ebx
    jne   L1
    test  edx,edx
    jne   L1
    mov   eax,1
    ret
L1: test  eax,eax
    jne   L2
    mov   eax,ebx
    ret
L2: test  ebx,ebx
    je    L5
L3: cmp   ebx,eax
    je    L5
    jae   L4
    sub   eax,ebx
    jmp   L3
L4: sub   ebx,eax
    jmp   L3
L5: ret
; Author: Paul Hsieh
gcd:
    neg   eax
    je    L3
L1: neg   eax
    xchg  eax,edx
L2: sub   eax,edx
    jg    L2
    jne   L1
L3: add   eax,edx
    jne   L4
    inc   eax
L4: ret
A particular weakness of this compiler is that it is not sure what registers to start with. But regardless it is missing a huge number of optimization opportunities when compared with the human implementation. There is no comparison in speed or size. Of course, as a human, I have an immense advantage since I can see mathematical relationships that are missed by all C compilers.
Although xchg is not a particularly fast Intel instruction, it does help make it very tight, and probably not more than a cycle off of optimal for performance. The main loop of the routine exists entirely within an instruction pre-fetch buffer (16 bytes.)
Much as I would like, it is pointless to try and describe the exact performance of the algorithms given above. They are performance limited almost entirely by branch mis-prediction penalties and branch taken clocks.
Update: With the advent of modern CPUs such as the Alpha, Pentium-II or Athlon which have deep pipelines and correspondingly bad branch misprediction penalties, it makes sense to remove branches through "predication". In this case we can use min() and max() functions to remove the inner most if() statement in the original loop:
x = min(a,b);
y = max(a,b);
while( y != 0 ) {
    x  = min(x,y-x);   // min(x,y-x)
    y -= x;            // (y-x)+(x) - min(x,y-x)
}
But on a 32 bit machine:
min(x,y) = y + (((x-y)>>31) & (x-y))

So putting in the appropriate substitutions and simplifying as appropriate we arrive at:
int gcd (int x, int y) {
    int t;

    y += x;
    if( y == 0 ) y = 1;
    else do {
        x += x;
        t  = y - x;        // (y-x) - (x)
        x >>= 1;
        x += (t>>31) & t;  // min(y-x,x)
        y -= x;            // (y-x)+(x)-min(x,y-x) == max(x,y-x)
    } while( x != 0 );
    return y;
}
Once again, we take the Pepsi challenge and we get:
; WATCOM C/C++ v10.0a output
gcd:
    add   edx,eax
    jne   L1
    mov   eax,1
    jmp   L2
L1: mov   ebx,edx    ; 1   dep: edx
    add   eax,eax    ; 1
    sub   ebx,eax    ; 2   dep: eax
    mov   ecx,ebx    ; 3   dep: ebx
    sar   ecx,1fH    ; 4   dep: ecx
    sar   eax,1      ; 4/5 shifters
    and   ebx,ecx    ; 5   dep: ecx
    add   eax,ebx    ; 6   dep: ebx
    sub   edx,eax    ; 7   dep: eax
    test  eax,eax    ; 7
    jne   L1         ; 7
    mov   eax,edx
L2:
; Author: Paul Hsieh
gcd:
    add   edx,eax
    jne   L1
    mov   eax,1
    jmp   L2
L1: mov   ebx,edx    ; 1   dep: edx
    add   eax,eax    ; 1
    sub   ebx,eax    ; 2   dep: eax
    shr   eax,1      ; 2
    mov   ecx,ebx    ; 3   dep: ebx
    sar   ebx,1fH    ; 3
    and   ebx,ecx    ; 4   dep: ecx
    add   eax,ebx    ; 5   dep: ebx
    sub   edx,eax    ; 6   dep: eax
    test  eax,eax    ; 6
    jne   L1         ; 6
    mov   eax,edx
L2:
So I am still ahead, but in theory a good enough compiler could have equaled my results. Unlike the first loop, the performance of this loop is characterizable.
Update: Michael Abrash's famous book "The Zen of Code Optimization" actually says that the pure subtraction method is too slow (this contradicts actual testing that I performed.) I have also gotten feedback from people insisting that divide is better for certain special cases or larger numbers. And the final icing on the cake is that in "Handbook of Applied Cryptography" (ISBN 0-8493-8523-7) they give a "divide out powers of two" algorithm that appears to be a good compromise (but still suffers from the branch mis-prediction penalties.) I will need to investigate this a little deeper, and may need to revise these results. Outside analysis is welcome.
Update: Just as a question of completeness, I am presenting what I have discovered so far. Using what apparently is a classical approach, rather than either dividing or subtracting, we subtract powers of 2 times the smaller value from the larger. Furthermore, rather than pursuing the gcd algorithm all the way, once we have small enough values for a and b, we just perform a table lookup (table filling not shown here.)
#define SLIM (64)
static unsigned smallGcds[SLIM][SLIM];

static unsigned uintGcd (unsigned a, unsigned b) {
    unsigned c, d;

    /* Divide out low powers of 2 if we can (to decrease a, b) */
    d = 1;
    while (((a|b) & 1) == 0) {
        a >>= 1;
        b >>= 1;
        d <<= 1;
    }

    goto start;

    do {
        /* Find largest 2^t*b, less than a */
        c = b;
        do { c += c; } while (c <= a);

        /* a -= 2^t*b */
        a -= (c >> 1);

start:;
        /* Make sure a > b, and b != 0 */
        if (a < b) {
            if (a == 0) return d * b;
            c = a; a = b; b = c;
        }
    } while (a >= SLIM);

    /* Return with precalculated small gcds */
    return d * smallGcds[a][b];
}
Obviously some cases of (a,b) input need to be checked (a=0, b=0 sends the program into an infinite loop.)
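For example, a minimal guard in front of the call could look like this (a sketch only; the wrapper name is a placeholder, and it follows the gcd(0,0) == 1 convention of the first C version above):

static unsigned uintGcdChecked (unsigned a, unsigned b) {
    if ((a | b) == 0) return 1;   /* gcd(0,0): match the convention of the first gcd() above */
    return uintGcd (a, b);
}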
I believe that it might be possible to improve upon this by performing table look ups on several bits of each operand at once which could give prime linear factors which could be used to reduce the operands by several steps in one shot. So I am refraining from performing assembly language analysis until I resolve this.
Update: There have been a number of submissions that have come to my attention. The first is from Pat D:
The second submission is from Ville Miettinen who works (or worked) for Hybrid Graphics, using the cmovCC instructions:
Scan array of packed structure for non-zero key
Suppose we want to scan an array of structures just to determine if one has a non-zero value as a particular entry. That is to say, suppose we wish to implement the following function:
unsigned int existnzkey (struct foobar * table, unsigned int tablelength) {
    int i;

    for( i=0; i < tablelength; i++ )
        if( table[i].key != 0 ) return table[i].key;
    return 0;
}
As it stands, this code suffers from too many problems to analyze at the assembly level immediately. Applying some ingenuity and some standard tricks as described on my optimization page I arrive at:
static unsigned int existnzkey (unsigned int tablelength, struct foobar * table) {
    int i, c, d;

    d = i = 0;
    c = tablelength;
    goto Start;
    do {
        d = table[i].key;
        if( d != 0 ) return d;
        i++;
        c--;
        Start:;
    } while( c != 0 );
    return d;
}
While it may look a lot longer and seem to add unnecessary complexity, it in fact helps the compiler by practically giving it a variable to register map. Now many may comment that using goto's or labels is bad and causes compilers to turn off optimizations or whatever. Well, my compiler doesn't seem to have a problem with it, which is why I've used it.
Assuming we have:
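The original structure definition is not reproduced here; judging from the 8 byte stride and the key being read at offset 0 in the code below, it was something along these lines (the second field is just a placeholder, only the 8 byte size matters):

struct foobar {
    unsigned int key;     /* the field being scanned                   */
    unsigned int other;   /* placeholder; pads the struct to 8 bytes   */
};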
here's the assembly output of the compiler versus my hand coding:
; compiler
    xor   eax,eax
    test  ecx,ecx
    je    L2
L1: mov   eax,[esi]   ; 1 U -
    test  eax,eax     ; 2 U
    jne   L2          ; 2 V
    add   esi,8       ; 3 U
    dec   ecx         ; 3 V
    jne   L1          ; 5 - V +1brt
L2:
; by Pisen Chiang and Paul Hsieh
    xor   eax,eax
    test  ecx,ecx
    je    L2
    sub   esi,8
L1: add   esi,8       ; 1 U
    sub   ecx,1       ; 1 V
    sbb   eax,eax     ; 2 U -
    or    eax,[esi]   ; 3 U
    jz    L1          ; 4 V +1brt
    mov   eax,[esi]
L2:
The compiler wins the size contest, but mine is a heck of a lot faster because I avoid an extra test and branch in the inner loop. As the comments show, my loop saves a clock (on the Pentium.)
Impressed? Well, don't be too impressed. In a sense all of this should be academic. The real problem here is that we are accessing spaced out elements out of an array of packed structures. If the array is substantial in size then we end up paying for all the extra data that gets loaded (the cache can only load contiguous lines of data.) Splitting the structure up into parallel arrays of data which have locality of reference (i.e., all elements are read/written in the relevant inner loops) will tend to do more for performance than these sorts of optimizations. But even in that situation I don't think most compilers will use a repz scasd.
Nevertheless, in situations where you are adhering to a bad array structure design (like the vertex buffer array used in Microsoft's Direct 3D API) the above code does apply.
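To illustrate the splitting idea, a parallel array ("structure of arrays") layout might look like the following sketch (names are hypothetical); the scan then touches only contiguous key data:

/* Keep the keys in their own contiguous array so the scan only pulls
   key data into the cache, instead of whole packed structures. */
unsigned int existnzkey_soa (unsigned int tablelength, const unsigned int * keys) {
    unsigned int i;
    for (i = 0; i < tablelength; i++)
        if (keys[i] != 0) return keys[i];
    return 0;
}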
63 bit LFSR
The following is a typical cyclic redundancy check computing algorithm. It is basically a deterministic iterating step of a function that takes 63 bits of input, scrambles the bits in some uniformly pseudo-random way, then returns those 63 bits. Such algorithms can be used for high performance (but low security) encryption, pseudo-random numbers, testing for data integrity or a hash function.
typedef unsigned long UINT;

void TestC (UINT &b0, UINT &b1) {
    UINT L1, L2, L3;
    UINT H1, H2;

    // x = (x>>31)^(x>>30)^(x<<32)  (mod 2^63)

    L1 = (b1<<1)|(b0>>31);
    L2 = (b1<<2)|(b0>>30);
    H1 = (b1>>31);
    H2 = (b1>>30);
    b1 = H1^H2^b0 & 0x7FFFFFFF;
    b0 = L1^L2;
}
I have implemented this by using 63 of 64 bits in a pair of 32 bit call by reference variables. (My compiler does not support a native 64 bit integer type.)
; compiler
    lea   esi,[edx*2]      ;  1 U -
    shr   ebx,31           ;  2 U -
    or    esi,ebx          ;  3 U -
    mov   ebx,eax          ;  4 U -
    lea   ecx,[edx*4]      ;  5 U
    shr   ebx,30           ;  5 V
    or    ebx,ecx          ;  7 U - +agi
    mov   ecx,edx          ;  8 U -
    shr   ecx,31           ;  9 U -
    shr   edx,30           ; 10 U -
    xor   edx,ecx          ; 11 U -
    xor   edx,eax          ; 12 U -
    mov   eax,esi          ; 13 U
    and   edx,07FFFFFFFh   ; 13 V
    xor   eax,ebx          ; 14 U
; by Paul Hsieh and Pisen Chiang
    mov   esi,edx          ;  1 U
    xor   cl,cl            ;  1 V
    shld  edx,eax,1        ;  5 NP +4c*
    shld  esi,eax,2        ;  9 NP +4c
    adc   cl,0             ; 10 U
    xor   edx,esi          ; 10 V
    xchg  eax,edx          ; 12 NP +2c
    xor   dl,cl            ; 13 U
Here the cycle count is a close one on the Pentium MMX (*in fact on pre-MMX CPUs the two are tied, since the first shld takes an extra clock to decode.) This is mostly because of the surprisingly slow shld instructions. On Pentium Pro/II, K6 and 6x86MX processors, shld is significantly improved in performance, making this a more compelling solution based on size and speed.
The following is optimized purely for performance on Pentium based CPUs.
; by hand for Pentiums
    mov   esi,edx          ;  1 U
    xor   cl,cl            ;  1 V
    shl   esi,2            ;  2 U
    mov   ebx,eax          ;  2 V
    adc   cl,0             ;  3 U
    lea   edx,[edx*2]      ;  3 V
    shr   ebx,30           ;  4 U
    xor   edx,esi          ;  4 V
    xor   edx,ebx          ;  5 U
    xor   al,cl            ;  5 V
    shr   ebx,1            ;  6 U -
    xchg  eax,edx          ;  8 NP +2c
    xor   eax,ebx          ;  9 U
Quick and dirty bubble sort
Andrew Howe, from Core Designs (makers of Tomb Raider), sent me an extremely short sort loop. Originally written for WATCOM C/C++, I have stripped off the cruft so that you can see it here in its most optimal form.
; by Andrew Howe
outerloop:
    lea   ebx,[edi+ecx*4]
    mov   eax,[edi]
cmploop:
    sub   ebx,4
    cmp   eax,[ebx]
    jle   notyet
    xchg  eax,[ebx]
notyet:
    cmp   ebx,edi
    jnz   cmploop
    stosd
    loop  outerloop
Notice the use of xchg, stosd, and loop, something you just won't see from C compilers. I did not bother working out the timings of this routine. Remember that for some applications, bubble sort will outperform some of the more sophisticated sorting algorithms. For example, 3D applications that need Z-sorting, but can rely on temporal locality to always maintain at least a mostly sorted list. (For another interesting departure along these lines see <<Sorting with less than all the facts>>)
A fast implementation of strlen()
Recently, someone wrote to me with the comment that strlen() is a very commonly called function, and as such was interested in possible performance improvements for it. At first, without thinking too hard about it, I didn't see how there was any opportunity to fundamentally improve the algorithm. I was right, but as far as low level algorithmic scrutiny is concerned, there is plenty of opportunity. Basically, the algorithm is byte scan based, and as such the typical thing that the C version will do wrong is miss the opportunity to reduce load redundancy.
; compiler
    mov   edx,ebx
    cmp   byte ptr [ebx],0
    je    l2
l1: mov   ah,[ebx+1]       ; U
    inc   ebx              ; V
    test  ah,ah            ; U
    jne   l1               ; V +1brt
l2: sub   ebx,edx
; by Paul Hsieh
    lea   ecx,[ebx-1]
l1: inc   ecx
    test  ecx,3
    jz    l2
    cmp   byte ptr [ecx],0
    jne   l1
    jmp   l6
l2: mov   eax,[ecx]        ; U
    add   ecx,4            ; V
    test  al,al            ; U
    jz    l5               ; V
    test  ah,ah            ; U
    jz    l4               ; V
    test  eax,0ff0000h     ; U
    jz    l3               ; V
    test  eax,0ff000000h   ; U
    jnz   l2               ; V +1brt
    inc   ecx
l3: inc   ecx
l4: inc   ecx
l5: sub   ecx,4
l6: sub   ecx,ebx
Here, I've sacrificed size for performance, by essentially unrolling the loop 4 times. If the input strings are fairly long (which is when performance will matter) on a Pentium, the asm code will execute at a rate of 1.5 clocks per byte, while the C compiler takes 3 clocks per byte. If the strings are not long enough, branch mispredictions may make this solution worse than the straight forward one.
Update!
While discussing sprite data copying (see next example) I realized that there is a significant improvement for 32-bit x86's that have slow branching (P-IIs and Athlon.)
; by Paul Hsieh
    lea   ecx,[ebx-1]
l1: inc   ecx
    test  ecx,3
    jnz   l3
l2: mov   edx,[ecx]        ; U
    mov   eax,07F7F7F7Fh   ; V
    and   eax,edx          ; U
    add   ecx,4            ; V
    add   eax,07F7F7F7Fh   ; U
    or    eax,edx          ; U
    and   eax,080808080h   ; U
    cmp   eax,080808080h   ; U
    je    l2               ; V +1brt
    sub   ecx,4
l3: cmp   byte ptr [ecx],0
    jne   l1
    sub   ecx,ebx
I think this code will perform better in general for all 32 bit x86s due to less branching. 16 bit x86's obviously can use a similar idea, but it should be pretty clear that it would be at least twice as slow. (I'm really starting to like this bit mask trick! See next example) Thanks to Zi Bin Yang for pointing out an error in my code.
Here's the C version of the same thing:
int fstrlen (char *s) {
    char *p;
    int d;
#define M1 (0x7f7f7f7f)
#define M2 (0x80808080)
#define SW (sizeof(int)/sizeof(char))

    p = s-1;
    do {
        p++;
        if( (((int)p) & (SW-1)) == 0 ) {
            do {
                d  = *((int *)p);
                p += SW;
                d  = (((d&M1)+M1)|d) & M2;
            } while( d == M2 );
            p -= SW;
        }
    } while( *p != 0 );
    return p - s;
}
It compiles to substantially the same thing (on a P6 it should have the same performance) but it has a few problems with it. It's not really portable to different sizes of int (the numbers 0x7f7f7f7f and 0x80808080 would have to be truncated or expanded, and casting p from a pointer to a scalar is not guaranteed to work) and it's not any easier to understand than the assembly.
Note that although alignment on reads is not necessary on modern x86s, it is performed here as a means of avoiding a memory read fault. Such a fault could otherwise happen as a result of reading past the end of the string; with aligned DWORD reads the overread never crosses into a new page, under the assumption that your memory allocator aligns to at least DWORD boundaries.
Update!
Ryan Mack has sent in an MMX version of this loop:
; MMX version by Ryan Mack
; Roughly 13 + 3n + BRANCH clocks on a P-II

const unsigned __int64 STRINGTBL[8] = {
    0, 0xff, 0xffff, 0xffffff,
    0xffffffff, 0xffffffffff, 0xffffffffffff, 0xffffffffffffff
};

/* ... */

    pxor     mm1, mm1
    mov      ecx, eax
    mov      edx, eax
    and      ecx, -8
    and      eax, 7
    movq     mm0, [ecx]
    por      mm0, [STRINGTBL+eax*8]
MAIN:
    add      ecx, 8
    pcmpeqb  mm0, mm1
    packsswb mm0, mm0
    movd     eax, mm0
    movq     mm0, [ecx]
    test     eax, eax
    jz       MAIN
    bsf      eax, eax
    shr      eax, 2
    lea      ecx, [ecx+eax-8]
    sub      ecx, edx
The individual loop iterations are not dependent on each other, so on a modern out of order processor this is just instruction issue limited. On a P-II we have 6 ALU microops + 1 load microop, which should translate to a 3 clock per 8 byte throughput. On AMD processors there are 7 riscops which is 3 clocks on an Athlon and 4 clocks on a K6 per 8 byte throughput.
Performance measurements on an Athlon show that (ignoring branch misprediction) for long strings this 64 bit method is upwards of 55% faster than the previous 32 bit method (which in turn is 22% faster than the WATCOM clib implementation.) However, for very short strings, this method is as much as 3 times slower than the previous method (which is about the same speed as the WATCOM clib implementation.) But keep in mind that if your strings are shorter, the branch mispredicts (which were not modeled in my tests) will take a comparatively larger percentage of the time. In any event, if you intend to use one of these alternatives, you should performance measure them to see which is better for your application.
Update!
Norbert Juffa has written in to inform me of an old trick which is a significant improvement over the device I discovered:
I did a little digging and indeed you can find an old thread discussing this on google groups. So here's an implementation in C:
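The trick in question detects a zero byte in a dword with (x - 0x01010101) & ~x & 0x80808080, which needs one less operation per dword than the ((x & 0x7F7F7F7F) + 0x7F7F7F7F) | x device above. A strlen() built on it looks roughly like the following sketch (the function name is a placeholder, and this is not necessarily the exact code that was measured below):

#include <stdint.h>

/* Zero-byte test: (x - 0x01010101) & ~x & 0x80808080 is non-zero exactly
   when one of the four bytes of x is zero. */
int jstrlen (const char * s) {
    const char * p = s;

    /* walk up to a 4 byte aligned address with plain byte reads */
    while (((uintptr_t) p & 3) != 0) {
        if (*p == 0) return (int) (p - s);
        p++;
    }

    /* scan a dword at a time; may read up to 3 bytes past the
       terminator, but never past the aligned dword containing it */
    for (;;) {
        uint32_t x = *(const uint32_t *) p;
        if (((x - 0x01010101ul) & ~x & 0x80808080ul) != 0) break;
        p += 4;
    }

    /* the zero byte is somewhere in this dword; locate it */
    while (*p != 0) p++;
    return (int) (p - s);
}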
Compared to other solutions on an Athlon, this is indeed the fastest solution for strings up to about 32 characters (the MMX solution is faster for longer strings). This function greatly outperforms the strlen of most libraries.
Sprite data copying
A common question that comes up for programmers that are interested in rendering sprite graphics (i.e., game programmers) is how to write a high performance sprite blitter. In a recent discussion on rec.games.programmer, the questioner had a requirement that an array of pixels from one array be copied to a destination array unless the pixel value was zero (representing a transparency.) The starting code looked something like:
char * srcptr;
char * destptr;

/* ... */

    /* copy scanline except where transparent. */
    for( x=x0; x<xl; x++ ) {
        if( srcptr[x] != 0 ) destptr[x] = srcptr[x];
    }
Of course this code has several problems with it: (1) it has a branch in the inner loop whose prediction is likely to fail very often, and (2) the memory is being read one byte at a time, which wastes memory bandwidth and increases loop overhead.
That said, this solution is actually good if the memory unit has "write combine" and "write mask" capabilities and memory write access to the destination is more of a bottleneck than branch prediction, loop overhead or source reads. (This is true if the destination is VRAM, or the CPU is a highly clocked P-II.)
However, if reading the destination is not a problem (if the destination is a system memory buffer), then reading 4 bytes at a time and building a mask is not a bad solution, provided you can build one without using conditional branches (otherwise there would be nothing to gain.) Russ Williams came up with a straight forward approach using the carry flag (it is optimized for the Pentium):
; by Russ Williams
    pushad
    mov   esi,[pbSrc]
    mov   ecx,[w]
    shr   ecx,2
    mov   edi,[pbDest]
    sub   edi,esi
looptop:
    xor   ebx,ebx
    mov   eax,[esi]
    add   esi,4
    mov   edx,eax
    ror   edx,16
    add   al,255           ; cf = 1 if al!=0 or 0 if al==0
    sbb   bl,0             ; bl = -1 if al!=0 or 0 if al==0
    add   ah,255
    sbb   bh,0
    mov   eax,edx
    shl   ebx,16
    add   al,255
    sbb   bl,0
    add   ah,255
    sbb   bh,0
    mov   eax,[edi]
    ror   edx,16
    and   eax,ebx          ; dest & bg.mask (P6: PRS)
    or    eax,edx          ; merge with src
    dec   ecx
    mov   [edi+esi],eax
    jnz   looptop
    popad
The above loop works great for Pentiums and is out of reach for C compilers because of the use of the carry flag. However the P-II does not like the partial register stall (a non-overlappable 7 clock penalty.) After some discussion motivated by a comment by Christer Ericson, the following trick was observed:
(((D & 0x7F7F7F7F) + 0x7F7F7F7F) | D) & 0x80808080

The above expression will generate a bitmask in the high bits of each byte; 00h indicating a zero byte, and 80h indicating a non-zero byte. With some straight forward trickery you can turn this into 00h for zero and FFh for non-zero, giving us the mask we desire. Here's the code in assembly language:
; by Paul Hsieh
    sub   esi,edi
looptop:
    mov   eax,[esi+edi]    ; read sprite source bits
    mov   ebx,7f7f7f7fh
    and   ebx,eax
    add   ebx,7f7f7f7fh
    or    ebx,eax          ; 80808080 bits have non-zero status of bytes.
    and   ebx,80808080h
    shr   ebx,7            ; move to 01010101 bits.
    add   ebx,7f7f7f7fh    ; 80==on or 7f==off values in each byte.
    and   ebx,7f7f7f7fh    ; 00==on or 7f==off masks.
    lea   eax,[ebx+ebx]    ; eax has 00==on or fe==off masks. (P5: AGI stall)
    or    eax,ebx          ; eax has 00==on or ff==off masks.
    mov   ebx,eax
    not   ebx              ; ebx has 00==off or ff==on masks.
    and   eax,[edi]        ; background bits
    or    eax,[esi+edi]    ; merge with sprite bits for final result
    mov   [edi],eax        ; draw!
    add   edi,4
    dec   ecx
    jnz   looptop
Of course the above loop is not especially suited for Pentiums since practically the whole loop has data dependencies, and there is an AGI stall. These issues can be dealt with by unrolling the loop once and using ordinary addition instead of lea (The edx and ebp registers are available for this).
However, P-IIs will attempt to "naturally" unroll this loop, and don't suffer from AGI stalls, and therefore would stand to benefit much less from hand scheduling of this loop (though it probably would not hurt.) However, since the loop is 22 micro-ops (which is 2 more than the P-II reorder buffer limit) the loop cannot be totally unrolled.
A K6 can also only partially "unroll" this loop because it contains more than 6 decode clocks.
Actually, with the right source, C compilers should have little difficulty coming up with the source to the above (or something very similar.) But I'm not sure how you are supposed to convince your C compiler that it needs to be unrolled once.
Update: After some thought, it's obvious that the above algorithm can and should be performed using MMX and some of the new SSE instructions (movntq in particular). I may implement this at a later date.
Another idea, which apparently comes from Terje Mathisen, is to use the P-II's write combining features in conjunction with minimizing branch misprediction penalties. The idea is to use a mask to select between two destination addresses for the write bytes:
; Idea from Terje Mathisen, implemented by Sean T. Barrett
    sub   esi,edi
    mov   edx,temp
    sub   edx,edi          ; edx = (temp-edi)
    ...
loop:
    mov   al,[esi+edi]     ; al=src
    cmp   al,1             ; cf = 1 if src==0, 0 if src!=0
    sbb   ebx,ebx          ; ebx = -1 if src==0, 0 if src!=0
    and   ebx,edx          ; ebx = (temp-edi) if src==0, 0 if src!=0
    mov   [ebx+edi],al     ; store to dest (edi) or garbage (temp)
    inc   edi
    dec   ecx
    jnz   loop             ; well predicted branch
Of course, temp is a reused system array meant just as a trash bin for transparency writes. The outer loop gets complicated, but in ideal situations, this will make better use of the P-II chipset features without suffering from branch mispredict penalties. The carry flag trick still eludes modern C compilers. (Of course this loop should be unrolled a bit to improve performance.)
However, like the key scan problem listed above, all this may be academic. For most sprites that are rendered once and used over and over again, transparencies and solid source pixels are usually in long runs. So using an alternate encoding which gives transparency versus non-transparency runs will usually be more efficient. Blitting transparency would come down to adding a constant to the pointers, and blitting solid colors would be immediate without requiring examination of the source.
Update: Paolo Bonzini has written in with a more efficient inner loop. The idea is to apply the mask to the xor difference of the source and destination.
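In C terms the idea looks something like this (a sketch of the idea only, not Paolo's actual code; mask is assumed to hold 0xFF in each byte where the source pixel is non-zero and 0x00 where it is transparent):

#include <stdint.h>

/* Select src bytes where mask is 0xFF and dst bytes where mask is 0x00,
   using one xor-and-xor sequence instead of an and/and-not/or triple. */
uint32_t mergeBytes (uint32_t src, uint32_t dst, uint32_t mask) {
    return dst ^ ((src ^ dst) & mask);
}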
Paolo also submitted a trivial MMX loop:
Now it's time for MMX
On sandpile, DS asked how to take two packed 32 bit values in an MMX register and replicate the top byte of each dword across all four bytes of that dword (i.e., turn A???B??? into AAAABBBB).
I thought of two ways:
    psrld     mm0, 24                     ; 0 0 0 A 0 0 0 B
    packssdw  mm0, mm0                    ; 0 A 0 B 0 A 0 B
    punpckhwd mm0, mm0                    ; 0 A 0 A 0 B 0 B
    pmullw    mm0, Const_0101010101010101 ; A A A A B B B B
or
    psrld     mm0, 24      ; 0 0 0 A 0 0 0 B
    packssdw  mm0, mm0     ; 0 A 0 B 0 A 0 B
    packuswb  mm0, mm0     ; A B A B A B A B
    punpcklbw mm0, mm0     ; A A B B A A B B
    punpcklbw mm0, mm0     ; A A A A B B B B
The first has fewer instructions but higher latency which is ideal for a CPU like the Athlon. The second has more instructions but shorter latency, which is beneficial to CPUs like the K6, Cyrix or P55C. I am not sure which is the better loop for P6 architectures.
Of course once we enter the realm of MMX, the poor C compiler really has nothing to offer.
Generalized shift
Norbert Juffa was studying the Compaq Visual Fortran compiler and thought that generalized shift could be improved. In the Fortran language a general shift is always a left shift, but it may take a negative shift value (which is a right shift), and shift values beyond the word size in bits (in either direction) result in 0. Here's the algorithm I came up with:
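Expressed in plain C the required semantics are simply the following (an illustrative sketch of the specification only, not the optimized algorithm itself; the function name is a placeholder):

#include <stdint.h>

/* Fortran-style general shift: a positive count shifts left, a negative
   count shifts right, and any count of 32 or more (either way) gives 0. */
uint32_t genshift (uint32_t x, int sh) {
    if (sh >= 32 || sh <= -32) return 0;
    return (sh >= 0) ? (x << sh) : (x >> -sh);
}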
The ANSI C standard requires that we use unsigned int when using the right shift operation, due to the "implementation defined" status of right shifting signed numbers. If you are wondering why I mention it -- it's because this is not well known, and can lead to incorrect results. This time I have included Microsoft Visual C++:
The comments indicate the clock on which an ideal 3-way out-of-order x86 machine (i.e., an Athlon) could execute these instructions. Notice the issue stalls for each of these code sequences. This means that there may be a case for x86 CPUs to go with 4-way instruction decoding and issue in the future. In any event, MSVC++ seems to be doing quite well in terms of the final performance, but it is unable to simplify the back to back neg reg0 instructions. The classic software register renaming technique for avoiding a dependency stall (after mov esi, eax the role of the two registers can and should be swapped -- the two registers will be identical after this instruction except that one of them is available one clock later than the other) is also not being used. WATCOM C/C++ is having difficulty with register size mismatching, and is unaware of the standard sbb reg, reg trick.
64 bit BCD add
Norbert Juffa has given me some devilishly clever code for performing a 64 bit BCD addition.
; by Norbert Juffa
    mov   eax, dword ptr [x]     ; x (lo)
    mov   ebx, dword ptr [y]     ; y (lo)
    mov   edx, dword ptr [x+4]   ; x (hi)
    mov   ecx, dword ptr [y+4]   ; y (hi)

    ; here: EDX:EAX = augend, ECX:EBX = addend

    mov   esi, eax               ; x
    lea   edi, [eax+66666666h]   ; x + 0x66666666
    xor   esi, ebx               ; x ^ y
    add   eax, ebx               ; x + y
    shr   esi, 1                 ; t1 = (x ^ y) >> 1
    add   edi, ebx               ; x + y + 0x66666666
    sbb   ebx, ebx               ; capture carry
    rcr   edi, 1                 ; t2 = (x + y + 0x66666666) >> 1
    xor   esi, edi               ; t1 ^ t2
    and   esi, 88888888h         ; t3 = (t1 ^ t2) & 0x88888888
    add   eax, esi               ; x + y + t3
    shr   esi, 2                 ; t3 >> 2
    sub   eax, esi               ; x + y + t3 - (t3 >> 2)

    sub   edx, ebx               ; propagate carry
    mov   esi, edx               ; x
    lea   edi, [edx+66666666h]   ; x + 0x66666666
    xor   esi, ecx               ; x ^ y
    add   edx, ecx               ; x + y
    shr   esi, 1                 ; t1 = (x ^ y) >> 1
    add   edi, ecx               ; x + y + 0x66666666
  ;;sbb   esi, esi               ; capture carry
    rcr   edi, 1                 ; t2 = (x + y + 0x66666666) >> 1
    xor   esi, edi               ; t1 ^ t2
    and   esi, 88888888h         ; t3 = (t1 ^ t2) & 0x88888888
    add   edx, esi               ; x + y + t3
    shr   esi, 2                 ; t3 >> 2
    sub   edx, esi               ; x + y + t3 - (t3 >> 2)

    ; here EDX:EAX = sum
    mov   edi, z
    mov   [edi], eax
    mov   [edi+4], edx
The use of carries and rcr instructions, as opposed to their (sort of) C equivalents, shows the massive simplifications possible in assembly that are not offered in C.
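For comparison, the same kind of packed BCD addition can be written one dword (8 digits) at a time in portable C, though without access to the carry flag the top digit has to be corrected by a separate comparison (a sketch, not derived from Norbert's code):

#include <stdint.h>

/* Add two 8 digit packed BCD numbers.  Both inputs are assumed to hold
   valid BCD digits (0..9 in every nibble); a carry out of the top digit
   is simply discarded. */
uint32_t bcdAdd32 (uint32_t x, uint32_t y) {
    uint32_t u  = x + 0x66666666ul;            /* bias every digit by 6                    */
    uint32_t t  = u + y;                       /* biased sum                               */
    uint32_t c  = u ^ y ^ t;                   /* carry into each bit of that addition     */
    uint32_t nc = ~c & 0x11111110ul;           /* digits 0..6 that did not produce a carry */
    uint32_t r  = t - ((nc >> 2) | (nc >> 3)); /* take the +6 back out of those digits     */
    if (t >= u)                                /* no carry out of the top digit either...  */
        r -= 0x60000000ul;                     /* ...so un-bias it as well                 */
    return r;
}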
You can find more on BCD arithmetic here at Douglas W. Jones' website.
Pixel bashing
"Gordy" has been working on a software based graphics library and asked on USENET for optimized paths for many common pixel bashing routines. Here are some of the solutions that I presented.
The first is an MMX routine for compressing 4 bit per pixel to 1 bit per pixel in a specific ordering. The pixel translation simply takes the bottom bit from each nibble. The order of the original nibbles in a given qword is: 21436587 (don't ask me -- it's Gordy's library.) We can assume that the nibble bits are premasked (i.e., the other bits are 0.)
MagicConst dd 48124812h,48124812h

    ;....
    movq      mm7, MagicConst
    pcmpeqb   mm6, mm6     ; 1111111111111111
    psllw     mm6, 12      ; 1111000000000000
@quads:
    movq      mm0, [esi]   ; 0006000500080007
    movq      mm1, mm6     ; XXXX000000000000
    pmullw    mm0, mm7     ; 8765705800870070
    pand      mm1, mm0     ; 8765000000000000
    psrld     mm0, 28      ; 0000000000004321
    psrlw     mm1, 8       ; 0000000087650000
    por       mm0, mm1     ; 0000000087654321
    packuswb  mm0, mm0     ; [?B?A?B?A]
    packuswb  mm0, mm0     ; [BABABABA]
    movd      eax, mm0
    mov       [edi],ax
    add       esi, 8
    add       edi, 2
    dec       ecx
    jnz       @quads
Since the bits are isolated, a sequence of shift and ors can be replaced by a sequence of shifts and adds. But a sequence of shifts and adds can be made into a single multiply. From a performance point of view this is a good trade off since pmullw is a fully pipelined instruction and in a modern x86, many internal instances of this loop should be able to execute in parallel. This solution consumes 64 source bytes in just a handful of clocks -- it's not likely that any C based formulation could even come close.
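The same device works in scalar code too. For instance, with one pre-masked bit in each of the four low nibbles of a dword, a single multiply gathers those bits into a contiguous field (an illustrative sketch; this is not the constant used in the MMX loop above):

#include <stdint.h>

/* x must be pre-masked with 0x1111 (one isolated bit per nibble).
   Since every partial product lands on a distinct bit position, the
   shift-and-add sequence
       (x >> 0 & 1) | (x >> 3 & 2) | (x >> 6 & 4) | (x >> 9 & 8)
   collapses into one multiply followed by a shift and a mask. */
unsigned gather4 (uint32_t x) {
    return (unsigned) ((x * 0x249ul) >> 9) & 0xFu;
}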
The second was for averaging streams of bytes together (and rounding down.) Gordy was looking for a solution for pre-K6-2 MMX/3DNow! (which did not have the pavgusb or pavgb instructions) as well as a straight integer solution. Here are the solutions I proposed:
; Integer Solution by Paul Hsieh
LTop:
    mov   eax, [esi]
    mov   ebx, [edx]
    mov   ebp, eax
    and   eax, ebx
    xor   ebx, ebp
    shr   ebx, 1
    and   ebx, 7F7F7F7Fh
    add   eax, ebx
    mov   [edi], eax
    dec   ecx
    jnz   LTop
; MMX Solution by Paul Hsieh
Mask7F dq 7F7F7F7F7F7F7F7Fh    ; MMX has no per-byte shift, so shift words
                               ; and mask off the bits that slide in from
                               ; the neighbouring byte
LTop:
    movq  mm0, [esi]
    movq  mm1, [edx]
    movq  mm2, mm0
    pand  mm0, mm1
    pxor  mm1, mm2
    psrlw mm1, 1
    pand  mm1, Mask7F
    paddb mm0, mm1
    movq  [edi], mm0
    dec   ecx
    jnz   LTop
These solutions use the following observation:
    (A + B)/2  =  (A and B) + (A xor B)/2   (rounding down)
While the original (A + B) addition may overflow, the addition (A and B) + (A xor B)/2 will not.
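The same observation, written out in C for a dword of four packed bytes (a sketch):

#include <stdint.h>

/* Byte-wise average of four packed bytes, rounding down.  The mask keeps
   the per-byte shift from dragging a bit down from the byte above. */
uint32_t avgBytes (uint32_t a, uint32_t b) {
    return (a & b) + (((a ^ b) >> 1) & 0x7F7F7F7Ful);
}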
The third routine was one for performing a saturating add. Using MMX it's just a trivial application of the paddusb instruction (as "Gordy" pointed out), but what of the integer fallback routine? I came up with the following integer routine:
; Integer solution by Paul Hsieh
    mov   eax, [esi]        ; src0
    mov   ebx, [edi]        ; src1
    mov   ecx, eax
    mov   edx, ebx
    and   ecx, ebx          ; first bit carry
    xor   edx, eax          ; first bit add (mod 2)
    and   eax, 0xFEFEFEFE
    and   ebx, 0xFEFEFEFE
    add   eax, ebx          ; Add top 7 bits (A&0xFE)+(B&0xFE)
    sar   eax, 1            ; >>= 1 to capture carry bits
    and   ecx, 0x01010101   ; Is there a carry to the second bit?
    add   eax, ecx          ; (...)>>1
    mov   ecx, eax
    and   edx, 0x01010101   ; first bit
    and   eax, 0x7F7F7F7F
    shr   ecx, 7
    shl   eax, 1            ; (...)&0xFE
    and   ecx, 0x01010101   ; overflows
    or    eax, edx          ; ((...)&0xFE) + (((A&0x01)+(B&0x01))&1)
    xor   ecx, 0x81818181   ; blockers
    sub   ecx, 0x01010101   ; 0->80, 1->7F
    and   ecx, 0x7F7F7F7F   ; 0->00, 1->7F
    or    eax, ecx
    shl   ecx, 1
    or    eax, ecx
This time we are separating the bottom bit to make sure we can perform parallel adds with only 8 bits of work area at a time.
The carry over into the 9th bit is translated into a 0x00 or 0x7F mask. By or-ing in the mask into the destination then shifted left by one, this achieves the same effect as if the mask contained 0x00 or 0xFF values.
This time, since the carry flag is captured and used on the first add, the C compiler cannot duplicate this routine.
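For completeness, a saturating byte-wise add can also be written in portable C; it has to reconstruct the per-byte carries itself rather than read them from the carry flag (a sketch, independent of the routine above):

#include <stdint.h>

/* Per-byte saturating add of four packed bytes. */
uint32_t satAddBytes (uint32_t a, uint32_t b) {
    uint32_t t = (a & 0x7F7F7F7Ful) + (b & 0x7F7F7F7Ful); /* 7 bit sums; bit 7 of each byte = carry out of bit 6 */
    uint32_t s = t ^ ((a ^ b) & 0x80808080ul);            /* the ordinary wrapped 8 bit sums                     */
    uint32_t c = ((a & b) | ((a | b) & t)) & 0x80808080ul;/* 0x80 in each byte whose add overflowed              */
    return s | c | (c - (c >> 7));                        /* force overflowed bytes to 0xFF                      */
}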
Update: Oliver Weichhold posted an interesting question in comp.lang.asm.x86:
While one might instinctively look at clever ways of performing the multiple shifts and masking that might seem to be required, it's really not necessary. Consider that what you really want is a mapping of 16 input bits that you just want placed in some output positions, where each input bit is independent from the other bits.
This just screams "Look up tables" and indeed it leads to some pretty simple code:
    movzx     eax, word ptr input_A4R4G4B4
    movzx     ebx, al
    shr       eax, 8
    movd      mm0, dword ptr Table_G4B4_to_G15B15[ebx*4]
    movd      mm1, dword ptr Table_A4R4_to_A15R15[eax*4]
    punpckldq mm0, mm1
The tables can be set up with some kind of simplistic C code at initialization time.
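For example, assuming (purely for illustration) that each 4 bit channel is expanded by nibble replication into a 16 bit word -- the exact target format from the original question is not reproduced above -- the tables could be filled like this:

#include <stdint.h>

static uint32_t Table_G4B4_to_G15B15[256];
static uint32_t Table_A4R4_to_A15R15[256];

static void initTables (void) {
    unsigned i;
    for (i = 0; i < 256; i++) {
        uint32_t hi = (i >> 4) * 0x1111ul;   /* high nibble -> one 16 bit channel (assumed expansion) */
        uint32_t lo = (i & 15) * 0x1111ul;   /* low nibble  -> one 16 bit channel (assumed expansion) */
        Table_G4B4_to_G15B15[i] = (hi << 16) | lo;
        Table_A4R4_to_A15R15[i] = (hi << 16) | lo;
    }
}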
Converting Uppercase
Conventional wisdom says that to quickly convert an ASCII string to uppercase, one should use a table to convert each character. The problem is that the address modes of x86 processors require 32 (or 16) bit registers, so some kind of byte -> dword (or word) conversion also happens. This leads to partial register manipulation, which is penalized on AMD CPUs and is devastating to Intel CPUs.
So what is our alternative? Well, if we can amortize our performance using SIMD, then what we are really trying to do is translate the range [61h, 7Ah] to [41h, 5Ah]. The approach I came up with was to exploit the fact that ASCII is only well defined in the range [00h, 7Fh] by rotating the range then masking, then rotating and capturing the middle range into the 5th bit and subtracting that from the 4 input bytes:
; Integer SIMD uppercase(string) by Paul Hsieh
    mov   eax, src[esi]
This routine is within reach of most modern C compilers. (Stay tuned for the C version.) A cursory glance, however, reveals that this routine is easily converted to MMX.
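In the meantime, the idea translates to C along the following lines (a sketch that assumes plain 7 bit ASCII input; the function name is a placeholder):

#include <stdint.h>

/* Convert four packed ASCII bytes to upper case at once.  A byte is in
   ['a','z'] exactly when both subtractions below leave its top bit set;
   those bytes then have 0x20 subtracted (clearing bit 5). */
uint32_t upperCaseASCII (uint32_t x) {
    uint32_t b = 0x80808080ul | x;
    uint32_t c = b - 0x61616161ul;       /* top bit set where byte >= 'a'  */
    uint32_t d = ~(b - 0x7b7b7b7bul);    /* top bit set where byte <= 'z'  */
    uint32_t e = c & d & 0x80808080ul;   /* 0x80 where byte is lower case  */
    return x - (e >> 2);                 /* subtract 0x20 from those bytes */
}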
Update: The need to be able to support non-ASCII (i.e., include the range [80h, FFh]) has come up. One of the bonuses of being able to do the whole range is that UTF-8 encoded unicode can be directly translated this way. Anyhow, it's just a minor change, that unfortunately costs a cycle on the critical path:
; Integer SIMD uppercase(string) on UTF-8 data by Paul Hsieh
    mov   eax, src[esi]
Update: Norbert Juffa wrote in to inform me that as part of Sun's opening up of the Solaris source code, they have made public some of his sources for C string functions, including caseless comparison and tolower. You can view the source here.
Update: I have designed an alternative function that deals with the entire UTF-8 range, but is also more parallel and thus should execute somewhat faster:

; Integer SIMD uppercase(string) on UTF-8 data v2 by Paul Hsieh
    mov   eax, src[esi]
    mov   ebx, 0x80808080
    mov   edx, eax           ; src
    or    ebx, eax           ; (0x80808080 | src)
    mov   ecx, ebx
    sub   ebx, 0x7b7b7b7b    ; (0x80808080 | src) - 0x7b7b7b7b
    sub   ecx, 0x61616161    ; c = (0x80808080 | src) - 0x61616161
    not   ebx                ; d = ~((0x80808080 | src) - 0x7b7b7b7b)
    not   edx                ; ~src
    and   ebx, ecx           ; c & d
    and   edx, 0x80808080    ; ~src & 0x80808080
    and   ebx, edx           ; e = (c & d) & (~src & 0x80808080)
    shr   ebx, 2             ; e >> 2
    sub   eax, ebx           ; src - (e >> 2)

The C code is fairly comparable:

uint32_t upperCaseSIMD (uint32_t x) {
    uint32_t b = 0x80808080ul |  x;
    uint32_t c =  b - 0x61616161ul;
    uint32_t d = ~(b - 0x7b7b7b7bul);
    uint32_t e = (c & d) & (~x & 0x80808080ul);
    return x - (e >> 2);
}
Move bits
Norbert Juffa continues to study the Compaq Visual Fortran compiler (soon to become the Intel Fortran compiler), this time looking at the MVBITS command. A description of the function can be found here.
; Solution by Norbert Juffa
    mov   ecx, [LEN]       ; len
    mov   eax, [FROM]      ; from
    cmp   ecx, 32          ; (len < 32) ? CY : NC
    sbb   edx, edx         ; (len < 32) ? ~0 : 0
    shl   edx, cl          ; (len < 32) ? ((~0) << len) : 0
    mov   ecx, [FROMPOS]   ; frompos
    not   edx              ; mask = (len < 32) ? ~((~0) << len) : ~0
    shr   eax, cl          ; from >> frompos
    mov   ecx, [TOPOS]     ; topos
    and   eax, edx         ; (from >> frompos) & mask
    shl   edx, cl          ; mask << topos
    shl   eax, cl          ; bits << topos
    not   edx              ; ~(mask << topos)
    and   edx, [TO]        ; to & ~(mask << topos)
    or    eax, edx         ; to = (to & ~(mask << topos)) | (((from >> frompos) & mask) << topos)
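In C the operation being implemented is the following (a sketch mirroring the comments above; the parameter names are taken from those comments):

#include <stdint.h>

/* Copy len bits of from, starting at bit frompos, into to starting at
   bit topos, leaving the other bits of to unchanged (MVBITS semantics).
   Assumes frompos + len <= 32 and topos + len <= 32. */
uint32_t mvbits (uint32_t from, unsigned frompos, unsigned len,
                 uint32_t to, unsigned topos) {
    uint32_t mask = (len < 32) ? ~(~(uint32_t) 0 << len) : ~(uint32_t) 0;
    return (to & ~(mask << topos)) | (((from >> frompos) & mask) << topos);
}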
Register slide, register shift
Norbert Juffa, who was looking at some 64 bit shift code (for 32 bit x86 registers), reduced the problem to finding a good solution for:
    eax = c ? eax : edx
    edx = c ? edx : 0
under the assumption that ecx is either 0 or FFFFFFFF. After we both fumbled around quite a bit going down wrong tracks, Norbert hit upon the right idea:
; Solution by Norbert Juffa
    sub   eax, edx         ; a - d
    and   eax, ecx         ; c ? a - d : 0
    add   eax, edx         ; c ? a : d
    and   edx, ecx         ; c ? d : 0
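In C, with c assumed to be either 0 or 0xFFFFFFFF, the same selection reads (a sketch):

#include <stdint.h>

/* (a, d) stays (a, d) when c is all ones, and becomes (d, 0) when c is zero. */
void regSlide (uint32_t * a, uint32_t * d, uint32_t c) {
    *a = ((*a - *d) & c) + *d;   /* c ? a : d */
    *d &= c;                     /* c ? d : 0 */
}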
; Solution by Norbert Juffa
__declspec (naked) __int64 __stdcall MYLSHIFT64 (const __int64 *i, const int *sh)
{
    __asm {
        mov    ecx, [esp+8]   ; &sh
        mov    eax, [esp+4]   ; &i
        mov    ecx, [ecx]     ; sh
        mov    edx, [eax+4]   ; i_hi
        mov    eax, [eax]     ; i_lo
        shld   edx, eax, cl   ; sll (i,sh & 0x1f)
        shl    eax, cl        ;
#if (CPU == i386)
        test   ecx, 32        ; sh >= 32 ?
        jz     $lshift_done   ; nope, done
        mov    edx, eax       ; sll (i,32)
        xor    eax, eax       ;
    $lshift_done:
        cmp    ecx, 64        ; (sh>=64)||(sh<0)
        sbb    ecx, ecx       ; (sh>=64)||(sh<0) ? 0 : 0xffffffff
        and    eax, ecx       ; (sh>=64)||(sh<0) ? 0 : sll (i,sh)
        and    edx, ecx       ;
#else /* Athlon, P6+ */
        test   ecx, -64       ; (sh>=64)||(sh<0) ? NZ : ZR
        ror    ecx, 6         ; (64>sh>=32) ? CY : NC (ZF safe)
        mov    ecx, 0         ; 0
        cmovc  edx, eax       ; i = (64>sh>=32) ? sll(i,32) : i
        cmovc  eax, ecx       ;
        cmovnz edx, ecx       ; (sh>=64)||(sh<0) ? 0 : sll (i,sh)
        cmovnz eax, ecx       ;
#endif
        ret    8              ; pop two DWORD parms and ret
    }
}
64 bit integer question
Phil Carmody (a well known prime number searching hacker) posted to alt.lang.asm with the following question:
63 bit atomic counter
In comp.lang.asm, Matt Taylor posted that he was in search of a "lockless 63-bit counter" using 32 bit code (certainly using AMD's new AMD64 instruction set would make this trivial, but Matt is limited to 32 bit code.) I.e., just a memory location and code sequence that could perform an atomic increment and read even in multiprocessor scenarios. One issue is the fact that most memory operations are broken up over the bus as individual reads and writes, so ALU + memory operations will not be atomic -- however this can be dealt with by using the lock prefix. The real issue is that it's impossible to make two individual 32 bit memory accesses truly atomic.
First let us take a second to show a solution that doesn't work:
The problem is that between [[1]] and [[2]] your thread could be pre-empted by another thread trying to execute the same code. This means that on any given cycle the 64 bit value in [counter] can easily be inaccurate, and even grossly so. This is what is known as a race condition. There are endless incorrect ways of trying to patch this sort of code up, which can take some analysis to discover the flaws. I will skip all those as you can/should learn about them in a college level OS class.
So let's start by presenting two trivial solutions. The first is using AMD64 code:
So this describes the semantics that we are looking for -- we want to increment and return with the incremented value with respect to the local context. So not only do we require that we do this for 64 bits, but we require that we also get a read back of the counter in a manner that reflects that value before any other thread increments the counter. Of course, given that AMD64 is such a new instruction set, it's going to be a little while before we can deploy such a solution widely.
But starting with the Pentium, even 32 bit x86s have had access to the 64 bit compare and exchange instruction CMPXCHG8B:
So the idea is to speculatively hope that the counter will remain untouched by other threads from [[1]] through to [[3]]. The CMPXCHG8B instruction allows us to verify this before we commit to the increment operation. If there is any inconsistency (including a mismatch between [[1]] and [[2]] due to an intervening increment operation) then we just put ourselves at the back of the line of pending increments and retry the operation.
We can already see with this solution we've entered a realm where the theory is starting to frown on our assumptions. There is no guarantee that any given thread will eventually complete as it might be stuck in this retry loop for an arbitrary amount of time. In fact, when Matt timed this in a benchmark (presumably meant to really hammer on it with many threads and multiple CPUs) he found the performance of this code to be very unsatisfactory for precisely this reason -- so it has practical consequences too.
The "63 bit counter" comes from the idea of using one of the bits as a kind of flag to indicate that a wrap around has happened. We can assume that most of the time, the increment will only affect as few as 30 bits and that we only need to specifically handle the specific wrap around cases. With some help from Terje Mathisen and Matt Taylor, I came up with the following code (based on the XADD instruction which has been available since the 80486 processor):
The general idea is that the bottom 31 bits are just the bottom 31 bits of the lock, the 31st bit indicates a wrap has occurred, and the 30th bit can also indicate to us whether any noticed discrepancy in the high 32 bits was due to a wrap or not.
With the assumption that no more than 30 bits of (about a billion) increments will happen during a contention we're no longer in anything resembling sound theoretical territory. For example, there is an obvious race condition between [[1]] and [[2]] if, in fact, there were 4.3 billion increments from other threads in between the two operations due to a preemption. Nevertheless, having a billion threads take action between a few instructions is not a realistic scenario, so in practice this should work fairly well.
Atomic primitives like this are always tricky, so it pays to try to see why it works. (Said in another way, you should not trust this code, or anyone else's, unless you understand how it works.)
Ok, so by using XADD on the lower 32 bits, there isn't really a possibility that the lower 31 bits will go wrong. And when the 31 bit wrap occurs, the exact thread where the wrap occurs will be the sole thread that tries to correct the high word. When the high word is incorrect, it's exactly one less than the correct value and the "lock bit" (bit 31 of the low word) will be set. So it can be seen that the incrementing action keeps the bits in a well defined and deterministic state.
The specific case of the wrapping locker's read is taken care of precisely, so let us assume that we are one of the other threads. When reading, if the high half is consistent with the low half (i.e., by sandwiching the low half read between two high half reads) and there is no lock bit set, then we are set. If there is an inconsistency it's because some other thread has wrapped. We can then examine our own bit 30 to see whether we are supposed to be post or pre wrap which tells us whether we should take the later or earlier high half read respectively.
Finally if the lock bit is set, we know that we are post-increment, and we just wait until either the lock has been cleared or the high half changes then just reread the high half.
The commented out CLI and STI instructions show where these instructions can and should go to try to reduce the exposed region of uncertainty in the lock value (leading other threads to spin). On a single processor system they are sufficient to prevent any contention, and in multiprocessor systems it would give a kind of guarantee about the amount of time the counter would not be in a trivial state. But of course you require ring 0 access to execute these instructions, and they are not required for functional correctness.
And in C? Not a player I'm afraid. Even with OS support, the language cannot compete with this kind of low level access.
This article has been translated to Serbo-Croatian here.
If you are a native speaker of non-English language and you wish to translate this article into other languages please feel free to contact me. But do not waste my time with verbatim copies of what Google translate produces.
Stay tuned for more ...