Memmove causing program crashes, giving SIGTRAP in GDB(?)
Brian Inglis
Brian.Inglis@SystematicSW.ab.ca
Sun Mar 1 07:40:16 GMT 2026
Very good Kennon,
Neat and well researched, and surprisingly minimal!
Hopefully some of those approaches can eliminate all the problems with CPU errata
or unfixed bugs, so you no longer hit any crashes, while maintaining high
performance on fast hardware.
And given the source is in C, it should continue working okay on older and newer
compilers, CPUs, and combos of those. Nowadays little improves outright; they are
only moving the bottlenecks around, to where your code hopefully will no longer
notice the problems.
That's the issue I always had with "optimized" assembler: it's all well and good
with today's compiler and CPU, but give it a generation of each, and it's an
unpredictable pile of emoji, good only on old machines (like those I have) ;^>
We have to be able to run the same code on systems ranging from whatever today's
cheap mobile laptop celery-stick-in-the-muds are called, to GPU monster CPUs, to
the fractional or multiple package KCPU servers, with dozens to thousands of
threads on each, variable ISAs, uarchs, cache levels, sizes, and write policies.
That's actually an advantage for CISC ISAs, acting as an HLA, interpreted by the
instruction decoder into highly tuned RISC-like uops for dispatch into multiple
pipelined stages per thread, CPU, and/or package, to hopefully hide any
performance problems.
On 2026-02-27 19:30, KENNON J CONRAD via Cygwin wrote:
> I just wanted to add that the stash-and-store idea you suggested, which is also
> used in memmove, has a very nice impact on the assembly code.
>
> With the old code that does this for the last 0 to 7 words:
>     while (candidate_ptr > score_ptr) {
>         *candidate_ptr = *(candidate_ptr - 1);
>         candidate_ptr--;
>     }
>
> the assembly code shows this from the point where the move starts:
> .L24:
>         movdqu  -16(%rax), %xmm1
>         subq    $16, %rax
>         movups  %xmm1, 2(%rax)
>         cmpq    %rdx, %rax
>         jnb     .L24
>         movq    %r10, %rax
>         subq    %r9, %rax
>         subq    $16, %rax
>         notq    %rax
>         andq    $-16, %rax
>         addq    %r10, %rax
>         cmpq    %rax, %r9
>         jnb     .L28
>         movq    %rax, %rcx
>         movq    %rax, %rdx
>         movq    %r9, 48(%rsp)
>         subq    %r9, %rcx
>         subq    $1, %rcx
>         shrq    %rcx
>         leaq    2(%rcx,%rcx), %r8
>         negq    %rcx
>         subq    %r8, %rdx
>         leaq    (%rax,%rcx,2), %rcx
>         call    memmove
>         movq    48(%rsp), %r9
>         jmp     .L28
>
> But with stash and store:
>     *(uint64_t *)&candidates_index[new_score_rank + 1] = first_four;
>     *(uint64_t *)&candidates_index[new_score_rank + 5] = next_four;
>
> the assembly code from the point where the move starts is this:
> .L24:
>         movdqu  -16(%r9), %xmm1
>         subq    $16, %r9
>         movups  %xmm1, 2(%r9)
>         cmpq    %rax, %r9
>         jnb     .L24
>         movups  %xmm0, 2(%rdi,%rdx)
>         jmp     .L26
>
> There are a couple of extra assembly instructions to stash into xmm0 before
> the move, but this is a big reduction in assembly code size for the backward
> memory move. Not as fast as memmove would be if the DF weren't getting
> corrupted, but much better than the old code, and it completely avoids the
> risk of DF corruption during rep movsq in memmove for backward move sizes
> >= 8! I like it because there is no need to worry about whether rep movsb or
> rep movsw could also be vulnerable to DF corruption.
>> On 02/27/2026 11:49 AM PST Brian Inglis via Cygwin wrote:
>> Some perf reports and analysis imply that backward moves (with overlap?) are no
>> faster than straight rep movsb on some CPUs, so it may be better to just
>> simplify to that, unless you want to stash the final element(s) to be moved out
>> of the way in register(s), and use multiple registers in unrolled wide moves for
>> the aligned portion?
--
Take care. Thanks, Brian Inglis Calgary, Alberta, Canada
La perfection est atteinte Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter not when there is no more to add
mais lorsqu'il n'y a plus rien à retrancher but when there is no more to cut
-- Antoine de Saint-Exupéry