path: root/Python/fileutils.c
author    Serhiy Storchaka <storchaka@gmail.com>  2024-05-05 08:20:06 +0300
committer GitHub <noreply@github.com>  2024-05-05 08:20:06 +0300
commit    711c80bfca5dd17cb7c6ec26f0e44848b33aec04 (patch)
tree      62d8956a00dd3f7c8c5a81fd0e3229ca7c2018ea /Python/fileutils.c
parent    5dd36732c850084ce262b7869ed90d73a281296a (diff)
download  cpython-711c80bfca5dd17cb7c6ec26f0e44848b33aec04.tar.gz
          cpython-711c80bfca5dd17cb7c6ec26f0e44848b33aec04.zip
gh-118164: Break a loop between _pydecimal and _pylong and optimize int to str conversion (GH-118483)
For converting large ints to strings, CPython invokes a function in _pylong.py, which uses the decimal module to implement an asymptotically far sub-quadratic algorithm. But if the C decimal module isn't available, CPython uses _pydecimal.py instead, which in turn frequently calls str(int). If the int is very large, _pylong ends up doing that work, which in turn asks decimal to do "big" arithmetic, which in turn calls str(big_int), which in turn ... it can become infinite mutual recursion.

This change introduces a different int->str function that doesn't use decimal. It's asymptotically worse than the decimal-based algorithm, running in "Karatsuba time" instead, but that is still a huge improvement over quadratic time, and _pylong switches to it when the C decimal isn't available. It is also used for not-too-large integers (less than 450_000 bits), where it is faster (up to 2x for 30_000 bits) than the asymptotically better implementation that uses the C decimal.

Co-authored-by: Tim Peters <tim.peters@gmail.com>
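The decimal-free approach the message describes can be sketched as a classic divide-and-conquer conversion: split the integer by a power of ten near its midpoint, recurse on both halves, and zero-pad the low half. The helper below is an illustrative sketch of that technique, not the actual _pylong.py code; the function name `int_to_str`, the base-case cutoff of 100 digits, and the decimal-length estimate are all assumptions chosen for clarity. Its cost is dominated by big-int divmod/multiply, hence "Karatsuba time" with CPython's Karatsuba multiplication.

```python
def int_to_str(n: int) -> str:
    """Illustrative divide-and-conquer int -> decimal-string conversion
    (a sketch of the technique, not the actual _pylong.py code)."""

    def inner(n: int, digits: int) -> str:
        # `digits` is an estimate of n's decimal length; small pieces
        # fall back to the builtin str(), which is fast at this size.
        if digits <= 100:           # cutoff chosen arbitrarily for the sketch
            return str(n)
        half = digits >> 1
        hi, lo = divmod(n, 10 ** half)       # one big divmod per level
        # The low half must be zero-padded to exactly `half` digits;
        # the high half carries the leading digits unpadded.
        return inner(hi, digits - half) + inner(lo, half).zfill(half)

    if n < 0:
        return "-" + int_to_str(-n)
    # Rough lower-bound estimate of the decimal length from the bit length
    # (log10(2) is about 0.30103); underestimating is safe because the
    # base case uses str(), which never pads.
    return inner(n, n.bit_length() * 3 // 10 + 1)
```

Because each level does one divmod by a power of ten and halves the digit count, the work recurrence matches that of multiplication itself, which is why the commit message calls the result "Karatsuba time" rather than the quadratic time of a naive digit-by-digit loop.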
Diffstat (limited to 'Python/fileutils.c')
0 files changed, 0 insertions, 0 deletions