path: root/Python/Python-tokenize.c
author    Lysandros Nikolaou <lisandrosnik@gmail.com>  2023-10-11 17:14:44 +0200
committer GitHub <noreply@github.com>  2023-10-11 15:14:44 +0000
commit    01481f2dc13341c84b64d6dffc08ffed022712a6 (patch)
tree      706f721ed9a7e5fa7e1c6cb3c3026191c7c95475 /Python/Python-tokenize.c
parent    eb50cd37eac47dd4dc71ab42d0582dfb6eac4515 (diff)
download  cpython-01481f2dc13341c84b64d6dffc08ffed022712a6.tar.gz
          cpython-01481f2dc13341c84b64d6dffc08ffed022712a6.zip
gh-104169: Refactor tokenizer into lexer and wrappers (#110684)
* The lexer, which includes the actual lexeme-producing logic, goes into the `lexer` directory.
* The wrappers, one per input mode (file, string, utf-8, and readline), go into the `tokenizer` directory and include the logic for creating a lexer instance and managing the buffer for the different modes.

---------

Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
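As context for the input modes the commit message lists: `Python/Python-tokenize.c` backs the C accelerator for the stdlib `tokenize` module, and the readline-based wrapper mode corresponds to the module's line-reader entry point. A minimal sketch using only documented stdlib APIs (the source snippet is illustrative):

```python
import io
import tokenize

# The readline-based input mode: the tokenizer pulls one line at a
# time from a callable, here the readline method of a StringIO.
source = io.StringIO("x = 1\n")
tokens = list(tokenize.generate_tokens(source.readline))

# Print each lexeme with its token type name.
for tok in tokens:
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

The utf-8 mode is reached analogously through `tokenize.tokenize()`, which takes a bytes-producing readline and emits an initial ENCODING token.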
Diffstat (limited to 'Python/Python-tokenize.c')
-rw-r--r--  Python/Python-tokenize.c  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/Python/Python-tokenize.c b/Python/Python-tokenize.c
index 1b021069c5e..83b4aa4b1a7 100644
--- a/Python/Python-tokenize.c
+++ b/Python/Python-tokenize.c
@@ -1,6 +1,8 @@
 #include "Python.h"
 #include "errcode.h"
-#include "../Parser/tokenizer.h"
+#include "../Parser/lexer/state.h"
+#include "../Parser/lexer/lexer.h"
+#include "../Parser/tokenizer/tokenizer.h"
 #include "../Parser/pegen.h"             // _PyPegen_byte_offset_to_character_offset()