path: root/py/nlr.c
Commit history (message, author, date):
* py/nlr: Implement jump callbacks. (Damien George, 2023-06-02)
NLR buffers are usually quite large (use lots of C stack) and expensive to push and pop. Some of the time they are only needed to perform clean up if an exception happens, and then they re-raise the exception.

This commit allows optimizing that scenario by introducing a linked-list of NLR callbacks that are called automatically when an exception is raised. They are essentially a light-weight NLR handler that can implement a "finally" block, i.e. clean-up when an exception is raised, or (by passing `true` to nlr_pop_jump_callback) when execution leaves the scope.

Signed-off-by: Damien George <damien@micropython.org>
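
For illustration, a minimal sketch of how such a callback could be used, assuming the nlr_push_jump_callback()/nlr_pop_jump_callback() API described by this commit and that the callback receives a pointer to its own node; the resource type and helper names below are hypothetical:

    #include "py/nlr.h"

    // Hypothetical container: the callback node comes first so the node
    // pointer passed to the callback can be treated as a pointer to this struct.
    typedef struct _my_cleanup_t {
        nlr_jump_callback_node_t node;
        my_resource_t *res; // hypothetical resource needing clean-up
    } my_cleanup_t;

    static void my_cleanup_cb(void *ctx) {
        my_cleanup_t *c = ctx;
        my_resource_release(c->res); // hypothetical release function
    }

    void do_work(my_resource_t *res) {
        my_cleanup_t c = { .res = res };
        nlr_push_jump_callback(&c.node, my_cleanup_cb);
        // ... code that may raise an exception; if it does, my_cleanup_cb
        // runs automatically while the exception propagates ...
        nlr_pop_jump_callback(true); // normal exit: pop and run the callback ("finally")
    }

Compared with a full nlr_push()/nlr_pop() pair, the node is only two pointers, so the clean-up path costs far less C stack.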
* py/scheduler: Implement VM abort flag and mp_sched_vm_abort(). (Damien George, 2023-03-21)
This is intended to be used by the very outer caller of the VM/runtime.

It allows setting a top-level NLR handler that can be jumped to directly, in order to forcefully abort the VM/runtime.

Enable using:

    #define MICROPY_ENABLE_VM_ABORT (1)

Set up the handler at the top level using:

    nlr_buf_t nlr;
    nlr.ret_val = NULL;
    if (nlr_push(&nlr) == 0) {
        nlr_set_abort(&nlr);
        // call into the VM/runtime
        ...
        nlr_pop();
    } else {
        if (nlr.ret_val == NULL) {
            // handle abort
            ...
        } else {
            // handle other exception that propagated to the top level
            ...
        }
    }
    nlr_set_abort(NULL);

Schedule an abort, eg from an interrupt handler, using:

    mp_sched_vm_abort();

Signed-off-by: Damien George <damien@micropython.org>
* all: Reformat C and Python source code with tools/codeformat.py. (Damien George, 2020-02-28)
This is run with uncrustify 0.70.1, and black 19.10b0.
* py/nlr: Fix missing trailing characters in comments in nlr.c (stijn, 2017-12-29)
* py/nlr: Fix nlr functions for 64bit ports built with gcc on Windows (stijn, 2017-12-29)
The number of registers used should be 10, not 12, to match the assembly code in nlrx64.c. With this change the 64bit mingw builds don't need to use the setjmp implementation, and this fixes miscellaneous crashes and assertion failures as reported in #1751 for instance.

To avoid mistakes in the future where something gcc-related for Windows only gets fixed for one particular compiler/environment combination, make use of a MICROPY_NLR_OS_WINDOWS macro.

To make sure everything nlr-related is now ok when built with gcc this has been verified with:
- unix port built with gcc on Cygwin (i686-pc-cygwin-gcc and x86_64-pc-cygwin-gcc, version 6.4.0)
- windows port built with mingw-w64's gcc from Cygwin (i686-w64-mingw32-gcc and x86_64-w64-mingw32-gcc, version 6.4.0) and MSYS2 (like the ones on Cygwin but version 7.2.0)
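
A rough sketch of what such a platform macro might look like; the exact preprocessor conditions and the non-Windows register count of 8 are assumptions for illustration, not copied from the real nlr.h:

    // Sketch: detect Windows-targeting gcc builds (mingw-w64, Cygwin) in one
    // place, instead of repeating compiler-specific checks across the NLR code.
    #if defined(_WIN32) || defined(__CYGWIN__)
    #define MICROPY_NLR_OS_WINDOWS (1)
    #else
    #define MICROPY_NLR_OS_WINDOWS (0)
    #endif

    // On x86-64 the number of registers saved in an nlr_buf_t depends on the
    // calling convention; whatever the count, it must match the assembly in
    // nlrx64.c (10 for Windows, per this commit).
    #if defined(__x86_64__)
        #if MICROPY_NLR_OS_WINDOWS
        #define MICROPY_NLR_NUM_REGS (10)
        #else
        #define MICROPY_NLR_NUM_REGS (8)
        #endif
    #endif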
* py/nlr: Factor out common NLR code to macro and generic funcs in nlr.c. (Damien George, 2017-12-28)
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot of the NLR code, specifically that dealing with pushing and popping the NLR pointer to maintain the linked-list of NLR buffers. This patch factors all of that code out of the specific implementations into generic functions in nlr.c, along with a helper macro in nlr.h. This eliminates duplicated code.
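
The generic push/pop handling described above boils down to something like the following sketch; the names follow the commit description, and the real functions may also save extra per-thread state:

    // Generic tail of nlr_push(): called by each machine-specific nlr_push()
    // after it has saved its registers, to link the buffer into the
    // per-thread list headed by nlr_top.
    unsigned int nlr_push_tail(nlr_buf_t *nlr) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        nlr->prev = *top;
        *top = nlr;
        return 0; // 0 means "no exception yet", like setjmp()
    }

    // Generic nlr_pop(): unlink the most recently pushed buffer.
    void nlr_pop(void) {
        nlr_buf_t **top = &MP_STATE_THREAD(nlr_top);
        *top = (*top)->prev;
    }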
* Revert "py/nlr: Factor out common NLR code to generic functions." (Paul Sokolovsky, 2017-12-26)
This reverts commit 6a3a742a6c9caaa2be0fd0aac7a5df4ac816081c.

The above commit has a number of faults, starting from the motivation down to the actual implementation.

1. Faulty implementation.

The original code contained functions like:

    NORETURN void nlr_jump(void *val) {
        nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
        nlr_buf_t *top = *top_ptr;
        ...
        __asm volatile (
            "mov %0, %%edx          \n" // %edx points to nlr_buf
            "mov 28(%%edx), %%esi   \n" // load saved %esi
            "mov 24(%%edx), %%edi   \n" // load saved %edi
            "mov 20(%%edx), %%ebx   \n" // load saved %ebx
            "mov 16(%%edx), %%esp   \n" // load saved %esp
            "mov 12(%%edx), %%ebp   \n" // load saved %ebp
            "mov 8(%%edx), %%eax    \n" // load saved %eip
            "mov %%eax, (%%esp)     \n" // store saved %eip to stack
            "xor %%eax, %%eax       \n" // clear return register
            "inc %%al               \n" // increase to make 1, non-local return
            "ret                    \n" // return
            :           // output operands
            : "r"(top)  // input operands
            :           // clobbered registers
        );
    }

which clearly stated that the C-level variable should be a parameter of the assembly, which then moved it into the correct register.

Whereas now it's:

    NORETURN void nlr_jump_tail(nlr_buf_t *top) {
        (void)top;
        __asm volatile (
            "mov 28(%edx), %esi     \n" // load saved %esi
            "mov 24(%edx), %edi     \n" // load saved %edi
            "mov 20(%edx), %ebx     \n" // load saved %ebx
            "mov 16(%edx), %esp     \n" // load saved %esp
            "mov 12(%edx), %ebp     \n" // load saved %ebp
            "mov 8(%edx), %eax      \n" // load saved %eip
            "mov %eax, (%esp)       \n" // store saved %eip to stack
            "xor %eax, %eax         \n" // clear return register
            "inc %al                \n" // increase to make 1, non-local return
            "ret                    \n" // return
        );
        for (;;); // needed to silence compiler warning
    }

which just tries to perform operations on a completely arbitrary register (%edx in this case). The outcome is the expected one: barring the pure random luck of the compiler putting the right value in that register, there's a crash.

2. Non-critical assessment.

The original commit message says "There is a small overhead introduced (typically 1 machine instruction)". That machine instruction is a call if the compiler doesn't perform tail-call optimization (which happens regularly), and it's 1 instruction only with the broken code shown above; fixing it requires adding more. With the inefficiencies already present in the NLR code, the overhead becomes "considerable" (several times more than 1%), not "small". The commit message also says "This eliminates duplicated code." An obvious way to eliminate duplication would be to factor out common code to macros, not to introduce overhead and breakage like the above.

3. Faulty motivation.

All this started with a report of warnings/errors happening for a niche compiler. It could have been solved in one of the direct ways:

a) fixing it just for the affected compiler(s);
b) rewriting it in proper assembly (like it was before, BTW);
c) by not doing anything at all, as MICROPY_NLR_SETJMP exists exactly to address minor-impact cases like that (where a) or b) are not applicable).

Instead, a backwards "solution" was put forward, leading to all the issues above. The best action thus appears to be to revert and rework, not to try to work around what went haywire in the first place.
* py/nlr: Factor out common NLR code to generic functions. (Damien George, 2017-12-20)
Each NLR implementation (Thumb, x86, x64, xtensa, setjmp) duplicates a lot of the NLR code, specifically that dealing with pushing and popping the NLR pointer to maintain the linked-list of NLR buffers. This patch factors all of that code out of the specific implementations into generic functions in nlr.c. This eliminates duplicated code.

The factoring also allows the machine-specific NLR code to be pure assembler code, thus allowing nlrthumb.c to use naked function attributes in the correct way (naked functions can only have basic inline assembler code in them).

There is a small overhead introduced (typically 1 machine instruction) because now the generic nlr_jump() must call nlr_jump_tail() rather than them being one combined function.
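
For context, the split described in the last paragraph looks roughly like this sketch: a generic nlr_jump() that unlinks the top buffer and then hands control to the machine-specific nlr_jump_tail() (simplified, not a verbatim copy of the commit):

    // Generic part of nlr_jump(): record the value being "thrown", unlink the
    // top nlr_buf_t, then jump to the machine-specific register-restore code.
    NORETURN void nlr_jump(void *val) {
        nlr_buf_t **top_ptr = &MP_STATE_THREAD(nlr_top);
        nlr_buf_t *top = *top_ptr;
        if (top == NULL) {
            nlr_jump_fail(val); // no active handler: fatal error
        }
        top->ret_val = val;
        *top_ptr = top->prev;   // unlink before jumping
        nlr_jump_tail(top);     // machine-specific: restores registers so the
                                // original nlr_push() call returns non-zero
    }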