Diffstat (limited to 'Doc/library')
-rw-r--r--   Doc/library/archiving.rst                |    4
-rw-r--r--   Doc/library/asyncio-eventloop.rst        |   25
-rw-r--r--   Doc/library/asyncio-task.rst             |   21
-rw-r--r--   Doc/library/compression.rst              |   18
-rw-r--r--   Doc/library/compression.zstd.rst         |  840
-rw-r--r--   Doc/library/curses.rst                   |    2
-rw-r--r--   Doc/library/functions.rst                |   76
-rw-r--r--   Doc/library/hashlib.rst                  |    2
-rw-r--r--   Doc/library/importlib.resources.abc.rst  |   48
-rw-r--r--   Doc/library/io.rst                       |   85
-rw-r--r--   Doc/library/pdb.rst                      |   20
-rw-r--r--   Doc/library/signal.rst                   |    4
-rw-r--r--   Doc/library/socket.rst                   |   10
-rw-r--r--   Doc/library/stdtypes.rst                 |   22
14 files changed, 1051 insertions, 126 deletions
diff --git a/Doc/library/archiving.rst b/Doc/library/archiving.rst
index c9284949af4..da0b3f8c3e7 100644
--- a/Doc/library/archiving.rst
+++ b/Doc/library/archiving.rst
@@ -5,13 +5,15 @@ Data Compression and Archiving
******************************
The modules described in this chapter support data compression with the zlib,
-gzip, bzip2 and lzma algorithms, and the creation of ZIP- and tar-format
+gzip, bzip2, lzma, and zstd algorithms, and the creation of ZIP- and tar-format
archives. See also :ref:`archiving-operations` provided by the :mod:`shutil`
module.
.. toctree::
+ compression.rst
+ compression.zstd.rst
zlib.rst
gzip.rst
bz2.rst
diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst
index 21f7d0547af..91970c28239 100644
--- a/Doc/library/asyncio-eventloop.rst
+++ b/Doc/library/asyncio-eventloop.rst
@@ -361,7 +361,7 @@ Creating Futures and Tasks
.. versionadded:: 3.5.2
-.. method:: loop.create_task(coro, *, name=None, context=None, eager_start=None)
+.. method:: loop.create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)
Schedule the execution of :ref:`coroutine <coroutine>` *coro*.
Return a :class:`Task` object.
@@ -370,6 +370,10 @@ Creating Futures and Tasks
for interoperability. In this case, the result type is a subclass
of :class:`Task`.
+ The full function signature is largely the same as that of the
+ :class:`Task` constructor (or factory) - all of the keyword arguments to
+ this function are passed through to that interface.
+
If the *name* argument is provided and not ``None``, it is set as
the name of the task using :meth:`Task.set_name`.
@@ -388,8 +392,15 @@ Creating Futures and Tasks
.. versionchanged:: 3.11
Added the *context* parameter.
+ .. versionchanged:: 3.13.3
+      Added *kwargs*, which passes on arbitrary extra parameters, including *name* and *context*.
+
+ .. versionchanged:: 3.13.4
+ Rolled back the change that passes on *name* and *context* (if it is None),
+ while still passing on other arbitrary keyword arguments (to avoid breaking backwards compatibility with 3.13.3).
+
.. versionchanged:: 3.14
- Added the *eager_start* parameter.
+ All *kwargs* are now passed on. The *eager_start* parameter works with eager task factories.
.. method:: loop.set_task_factory(factory)
@@ -402,6 +413,16 @@ Creating Futures and Tasks
event loop, and *coro* is a coroutine object. The callable
must pass on all *kwargs*, and return a :class:`asyncio.Task`-compatible object.
+ .. versionchanged:: 3.13.3
+ Required that all *kwargs* are passed on to :class:`asyncio.Task`.
+
+ .. versionchanged:: 3.13.4
+ *name* is no longer passed to task factories. *context* is no longer passed
+ to task factories if it is ``None``.
+
+ .. versionchanged:: 3.14
+ *name* and *context* are now unconditionally passed on to task factories again.
+
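+   For example, a task factory that records every created task while passing
+   all keyword arguments through might look like this (an illustrative
+   sketch, not part of asyncio itself)::
+
+      all_tasks = []
+
+      def task_factory(loop, coro, **kwargs):
+          # Pass every keyword argument straight through to the Task constructor.
+          task = asyncio.Task(coro, loop=loop, **kwargs)
+          all_tasks.append(task)
+          return task
+
+      loop.set_task_factory(task_factory)
+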
.. method:: loop.get_task_factory()
Return a task factory or ``None`` if the default one is in use.
diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst
index 59acce1990a..b19ffa8213a 100644
--- a/Doc/library/asyncio-task.rst
+++ b/Doc/library/asyncio-task.rst
@@ -238,18 +238,24 @@ Creating Tasks
-----------------------------------------------
-.. function:: create_task(coro, *, name=None, context=None)
+.. function:: create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)
Wrap the *coro* :ref:`coroutine <coroutine>` into a :class:`Task`
and schedule its execution. Return the Task object.
- If *name* is not ``None``, it is set as the name of the task using
- :meth:`Task.set_name`.
+ The full function signature is largely the same as that of the
+ :class:`Task` constructor (or factory) - all of the keyword arguments to
+ this function are passed through to that interface.
An optional keyword-only *context* argument allows specifying a
custom :class:`contextvars.Context` for the *coro* to run in.
The current context copy is created when no *context* is provided.
+   An optional keyword-only *eager_start* argument allows specifying
+   whether the task should execute eagerly during the call to
+   :func:`!create_task`, or be scheduled later. If *eager_start* is not
+   passed, the mode set by :meth:`loop.set_task_factory` will be used.
+
The task is executed in the loop returned by :func:`get_running_loop`;
:exc:`RuntimeError` is raised if there is no running loop in the
current thread.
@@ -290,6 +296,9 @@ Creating Tasks
.. versionchanged:: 3.11
Added the *context* parameter.
+ .. versionchanged:: 3.14
+ Added the *eager_start* parameter by passing on all *kwargs*.
+
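+   For example, a task can be started eagerly so that it begins running
+   during the call itself rather than waiting for the event loop (a brief
+   sketch; ``fetch_data`` is a hypothetical coroutine)::
+
+      async def fetch_data():
+          ...
+
+      async def main():
+          task = asyncio.create_task(fetch_data(), name="fetch", eager_start=True)
+          await task
+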
Task Cancellation
=================
@@ -330,7 +339,7 @@ and reliable way to wait for all tasks in the group to finish.
.. versionadded:: 3.11
- .. method:: create_task(coro, *, name=None, context=None)
+ .. method:: create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)
Create a task in this task group.
The signature matches that of :func:`asyncio.create_task`.
@@ -342,6 +351,10 @@ and reliable way to wait for all tasks in the group to finish.
Close the given coroutine if the task group is not active.
+ .. versionchanged:: 3.14
+
+      Passes on all *kwargs* to :meth:`loop.create_task`.
+
Example::
async def main():
diff --git a/Doc/library/compression.rst b/Doc/library/compression.rst
new file mode 100644
index 00000000000..618b4a3c2bd
--- /dev/null
+++ b/Doc/library/compression.rst
@@ -0,0 +1,18 @@
+The :mod:`!compression` package
+===============================
+
+.. versionadded:: 3.14
+
+The :mod:`!compression` package contains the canonical compression modules,
+which provide interfaces to several different compression algorithms. Some of
+these modules have historically been available as separate top-level modules;
+those will continue to be available under their original names for
+compatibility reasons, and will not be removed without a deprecation cycle.
+The use of modules in :mod:`!compression` is encouraged where practical.
+
+* :mod:`!compression.bz2` -- Re-exports :mod:`bz2`
+* :mod:`!compression.gzip` -- Re-exports :mod:`gzip`
+* :mod:`!compression.lzma` -- Re-exports :mod:`lzma`
+* :mod:`!compression.zlib` -- Re-exports :mod:`zlib`
+* :mod:`compression.zstd` -- Wrapper for the Zstandard compression library
+
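+For example, existing code that uses :mod:`gzip` can switch to the namespaced
+module, since the re-exported names behave the same (a small illustrative
+sketch)::
+
+   from compression import gzip
+
+   blob = gzip.compress(b"hello world")
+   assert gzip.decompress(blob) == b"hello world"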
diff --git a/Doc/library/compression.zstd.rst b/Doc/library/compression.zstd.rst
new file mode 100644
index 00000000000..1e1802155a1
--- /dev/null
+++ b/Doc/library/compression.zstd.rst
@@ -0,0 +1,840 @@
+:mod:`!compression.zstd` --- Compression compatible with the Zstandard format
+=============================================================================
+
+.. module:: compression.zstd
+ :synopsis: Low-level interface to compression and decompression routines in
+ the zstd library.
+
+.. versionadded:: 3.14
+
+**Source code:** :source:`Lib/compression/zstd/__init__.py`
+
+--------------
+
+This module provides classes and functions for compressing and decompressing
+data using the Zstandard (or *zstd*) compression algorithm. The
+`zstd manual <https://facebook.github.io/zstd/doc/api_manual_latest.html>`__
+describes Zstandard as "a fast lossless compression algorithm, targeting
+real-time compression scenarios at zlib-level and better compression ratios."
+Also included is a file interface that supports reading and writing the
+contents of ``.zst`` files created by the :program:`zstd` utility, as well as
+raw zstd compressed streams.
+
+The :mod:`!compression.zstd` module contains:
+
+* The :func:`.open` function and :class:`ZstdFile` class for reading and
+ writing compressed files.
+* The :class:`ZstdCompressor` and :class:`ZstdDecompressor` classes for
+ incremental (de)compression.
+* The :func:`compress` and :func:`decompress` functions for one-shot
+ (de)compression.
+* The :func:`train_dict` and :func:`finalize_dict` functions and the
+ :class:`ZstdDict` class to train and manage Zstandard dictionaries.
+* The :class:`CompressionParameter`, :class:`DecompressionParameter`, and
+ :class:`Strategy` classes for setting advanced (de)compression parameters.
+
+
+Exceptions
+----------
+
+.. exception:: ZstdError
+
+ This exception is raised when an error occurs during compression or
+ decompression, or while initializing the (de)compressor state.
+
+
+Reading and writing compressed files
+------------------------------------
+
+.. function:: open(file, /, mode='rb', *, level=None, options=None, \
+ zstd_dict=None, encoding=None, errors=None, newline=None)
+
+ Open a Zstandard-compressed file in binary or text mode, returning a
+ :term:`file object`.
+
+ The *file* argument can be either a file name (given as a
+ :class:`str`, :class:`bytes` or :term:`path-like <path-like object>`
+ object), in which case the named file is opened, or it can be an existing
+ file object to read from or write to.
+
+   The *mode* argument can be either ``'rb'`` for reading (default), ``'wb'``
+   for overwriting, ``'ab'`` for appending, or ``'xb'`` for exclusive creation.
+   These can equivalently be given as ``'r'``, ``'w'``, ``'a'``, and ``'x'``
+   respectively. You may also open in text mode with ``'rt'``, ``'wt'``,
+   ``'at'``, and ``'xt'``.
+
+   When reading, the *options* argument can be a dictionary providing advanced
+   decompression parameters; see :class:`DecompressionParameter` for detailed
+   information about supported parameters. The *zstd_dict* argument is a
+   :class:`ZstdDict` instance to be used during decompression. When reading,
+   if the *level* argument is not ``None``, a :exc:`!TypeError` will be
+   raised.
+
+   When writing, the *options* argument can be a dictionary
+   providing advanced compression parameters; see
+   :class:`CompressionParameter` for detailed information about supported
+   parameters. The *level* argument is the compression level to use when
+   writing compressed data. Only one of *level* or *options* may be
+   non-``None``.
+ The *zstd_dict* argument is a :class:`ZstdDict` instance to be used during
+ compression.
+
+ In binary mode, this function is equivalent to the :class:`ZstdFile`
+ constructor: ``ZstdFile(file, mode, ...)``. In this case, the
+ *encoding*, *errors*, and *newline* parameters must not be provided.
+
+ In text mode, a :class:`ZstdFile` object is created, and wrapped in an
+ :class:`io.TextIOWrapper` instance with the specified encoding, error
+ handling behavior, and line endings.
+
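+   For example, a ``.zst`` file containing UTF-8 text could be read in text
+   mode like this (a short sketch; ``example.zst`` is a hypothetical file)::
+
+      from compression import zstd
+
+      with zstd.open("example.zst", "rt", encoding="utf-8") as f:
+          for line in f:
+              print(line.rstrip())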
+
+.. class:: ZstdFile(file, /, mode='rb', *, level=None, options=None, \
+ zstd_dict=None)
+
+ Open a Zstandard-compressed file in binary mode.
+
+ A :class:`ZstdFile` can wrap an already-open :term:`file object`, or operate
+ directly on a named file. The *file* argument specifies either the file
+ object to wrap, or the name of the file to open (as a :class:`str`,
+ :class:`bytes` or :term:`path-like <path-like object>` object). If
+ wrapping an existing file object, the wrapped file will not be closed when
+ the :class:`ZstdFile` is closed.
+
+ The *mode* argument can be either ``'rb'`` for reading (default), ``'wb'``
+ for overwriting, ``'xb'`` for exclusive creation, or ``'ab'`` for appending.
+ These can equivalently be given as ``'r'``, ``'w'``, ``'x'`` and ``'a'``
+ respectively.
+
+ If *file* is a file object (rather than an actual file name), a mode of
+ ``'w'`` does not truncate the file, and is instead equivalent to ``'a'``.
+
+ When reading, the *options* argument can be a dictionary
+ providing advanced decompression parameters; see
+ :class:`DecompressionParameter` for detailed information about supported
+ parameters. The *zstd_dict* argument is a :class:`ZstdDict` instance to be
+   used during decompression. When reading, if the *level* argument is not
+   ``None``, a :exc:`!TypeError` will be raised.
+
+   When writing, the *options* argument can be a dictionary
+   providing advanced compression parameters; see
+   :class:`CompressionParameter` for detailed information about supported
+ parameters. The *level* argument is the compression level to use when
+ writing compressed data. Only one of *level* or *options* may be passed. The
+ *zstd_dict* argument is a :class:`ZstdDict` instance to be used during
+ compression.
+
+ :class:`!ZstdFile` supports all the members specified by
+ :class:`io.BufferedIOBase`, except for :meth:`~io.BufferedIOBase.detach`
+ and :meth:`~io.IOBase.truncate`.
+ Iteration and the :keyword:`with` statement are supported.
+
+ The following method and attributes are also provided:
+
+ .. method:: peek(size=-1)
+
+ Return buffered data without advancing the file position. At least one
+ byte of data will be returned, unless EOF has been reached. The exact
+ number of bytes returned is unspecified (the *size* argument is ignored).
+
+ .. note:: While calling :meth:`peek` does not change the file position of
+ the :class:`ZstdFile`, it may change the position of the underlying
+ file object (for example, if the :class:`ZstdFile` was constructed by
+ passing a file object for *file*).
+
+ .. attribute:: mode
+
+ ``'rb'`` for reading and ``'wb'`` for writing.
+
+ .. attribute:: name
+
+ The name of the Zstandard file. Equivalent to the :attr:`~io.FileIO.name`
+ attribute of the underlying :term:`file object`.
+
+
+Compressing and decompressing data in memory
+--------------------------------------------
+
+.. function:: compress(data, level=None, options=None, zstd_dict=None)
+
+ Compress *data* (a :term:`bytes-like object`), returning the compressed
+ data as a :class:`bytes` object.
+
+ The *level* argument is an integer controlling the level of
+ compression. *level* is an alternative to setting
+ :attr:`CompressionParameter.compression_level` in *options*. Use
+ :meth:`~CompressionParameter.bounds` on
+ :attr:`~CompressionParameter.compression_level` to get the values that can
+   be passed for *level*. If advanced compression options are needed, the
+   *level* argument must be omitted and the
+   :attr:`!CompressionParameter.compression_level` parameter should be set in
+   the *options* dictionary instead.
+
+ The *options* argument is a Python dictionary containing advanced
+ compression parameters. The valid keys and values for compression parameters
+ are documented as part of the :class:`CompressionParameter` documentation.
+
+ The *zstd_dict* argument is an instance of :class:`ZstdDict`
+ containing trained data to improve compression efficiency. The
+ function :func:`train_dict` can be used to generate a Zstandard dictionary.
+
+
+.. function:: decompress(data, zstd_dict=None, options=None)
+
+ Decompress *data* (a :term:`bytes-like object`), returning the uncompressed
+ data as a :class:`bytes` object.
+
+ The *options* argument is a Python dictionary containing advanced
+   decompression parameters. The valid keys and values for decompression
+   parameters are documented as part of the :class:`DecompressionParameter`
+   documentation.
+
+   The *zstd_dict* argument is an instance of :class:`ZstdDict`
+   containing trained dictionary data; it must be the same Zstandard
+   dictionary that was used during compression.
+
+ If *data* is the concatenation of multiple distinct compressed frames,
+ decompress all of these frames, and return the concatenation of the results.
+
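+   A simple round trip with the one-shot functions might look like this (an
+   illustrative sketch)::
+
+      from compression import zstd
+
+      original = b"Zstandard example data " * 64
+      compressed = zstd.compress(original, level=10)
+      assert zstd.decompress(compressed) == original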
+
+.. class:: ZstdCompressor(level=None, options=None, zstd_dict=None)
+
+ Create a compressor object, which can be used to compress data
+ incrementally.
+
+ For a more convenient way of compressing a single chunk of data, see the
+ module-level function :func:`compress`.
+
+ The *level* argument is an integer controlling the level of
+ compression. *level* is an alternative to setting
+ :attr:`CompressionParameter.compression_level` in *options*. Use
+ :meth:`~CompressionParameter.bounds` on
+ :attr:`~CompressionParameter.compression_level` to get the values that can
+   be passed for *level*. If advanced compression options are needed, the
+   *level* argument must be omitted and the
+   :attr:`!CompressionParameter.compression_level` parameter should be set in
+   the *options* dictionary instead.
+
+ The *options* argument is a Python dictionary containing advanced
+ compression parameters. The valid keys and values for compression parameters
+ are documented as part of the :class:`CompressionParameter` documentation.
+
+ The *zstd_dict* argument is an optional instance of :class:`ZstdDict`
+ containing trained data to improve compression efficiency. The
+ function :func:`train_dict` can be used to generate a Zstandard dictionary.
+
+
+ .. method:: compress(data, mode=ZstdCompressor.CONTINUE)
+
+ Compress *data* (a :term:`bytes-like object`), returning a :class:`bytes`
+ object with compressed data if possible, or otherwise an empty
+ :class:`!bytes` object. Some of *data* may be buffered internally, for
+ use in later calls to :meth:`!compress` and :meth:`~.flush`. The returned
+ data should be concatenated with the output of any previous calls to
+ :meth:`~.compress`.
+
+ The *mode* argument is a :class:`ZstdCompressor` attribute, either
+ :attr:`~.CONTINUE`, :attr:`~.FLUSH_BLOCK`,
+ or :attr:`~.FLUSH_FRAME`.
+
+ When all data has been provided to the compressor, call the
+ :meth:`~.flush` method to finish the compression process. If
+ :meth:`~.compress` is called with *mode* set to :attr:`~.FLUSH_FRAME`,
+ :meth:`~.flush` should not be called, as it would write out a new empty
+ frame.
+
+ .. method:: flush(mode=ZstdCompressor.FLUSH_FRAME)
+
+ Finish the compression process, returning a :class:`bytes` object
+ containing any data stored in the compressor's internal buffers.
+
+ The *mode* argument is a :class:`ZstdCompressor` attribute, either
+ :attr:`~.FLUSH_BLOCK`, or :attr:`~.FLUSH_FRAME`.
+
+ .. attribute:: CONTINUE
+
+ Collect more data for compression, which may or may not generate output
+ immediately. This mode optimizes the compression ratio by maximizing the
+ amount of data per block and frame.
+
+ .. attribute:: FLUSH_BLOCK
+
+ Complete and write a block to the data stream. The data returned so far
+ can be immediately decompressed. Past data can still be referenced in
+ future blocks generated by calls to :meth:`~.compress`,
+ improving compression.
+
+ .. attribute:: FLUSH_FRAME
+
+ Complete and write out a frame. Future data provided to
+ :meth:`~.compress` will be written into a new frame and
+ *cannot* reference past data.
+
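+   For example, :attr:`~.FLUSH_BLOCK` can be used to make the data produced
+   so far decompressible while keeping the current frame open (a brief
+   sketch)::
+
+      from compression import zstd
+
+      comp = zstd.ZstdCompressor()
+      first = comp.compress(b"first chunk", mode=zstd.ZstdCompressor.FLUSH_BLOCK)
+      # ``first`` already holds a complete block that a streaming reader can
+      # decompress; the frame itself is still open.
+      rest = comp.compress(b"second chunk") + comp.flush()
+      stream = first + rest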
+
+.. class:: ZstdDecompressor(zstd_dict=None, options=None)
+
+ Create a decompressor object, which can be used to decompress data
+ incrementally.
+
+ For a more convenient way of decompressing an entire compressed stream at
+ once, see the module-level function :func:`decompress`.
+
+ The *options* argument is a Python dictionary containing advanced
+   decompression parameters. The valid keys and values for decompression
+   parameters are documented as part of the :class:`DecompressionParameter`
+   documentation.
+
+   The *zstd_dict* argument is an instance of :class:`ZstdDict`
+   containing trained dictionary data; it must be the same Zstandard
+   dictionary that was used during compression.
+
+ .. note::
+ This class does not transparently handle inputs containing multiple
+ compressed frames, unlike the :func:`decompress` function and
+ :class:`ZstdFile` class. To decompress a multi-frame input, you should
+ use :func:`decompress`, :class:`ZstdFile` if working with a
+ :term:`file object`, or multiple :class:`!ZstdDecompressor` instances.
+
+ .. method:: decompress(data, max_length=-1)
+
+ Decompress *data* (a :term:`bytes-like object`), returning
+ uncompressed data as bytes. Some of *data* may be buffered
+ internally, for use in later calls to :meth:`!decompress`.
+ The returned data should be concatenated with the output of any previous
+ calls to :meth:`!decompress`.
+
+ If *max_length* is non-negative, the method returns at most *max_length*
+ bytes of decompressed data. If this limit is reached and further
+ output can be produced, the :attr:`~.needs_input` attribute will
+ be set to ``False``. In this case, the next call to
+ :meth:`~.decompress` may provide *data* as ``b''`` to obtain
+ more of the output.
+
+ If all of the input data was decompressed and returned (either
+ because this was less than *max_length* bytes, or because
+ *max_length* was negative), the :attr:`~.needs_input` attribute
+ will be set to ``True``.
+
+ Attempting to decompress data after the end of a frame will raise a
+ :exc:`ZstdError`. Any data found after the end of the frame is ignored
+ and saved in the :attr:`~.unused_data` attribute.
+
+ .. attribute:: eof
+
+ ``True`` if the end-of-stream marker has been reached.
+
+ .. attribute:: unused_data
+
+ Data found after the end of the compressed stream.
+
+ Before the end of the stream is reached, this will be ``b''``.
+
+ .. attribute:: needs_input
+
+ ``False`` if the :meth:`.decompress` method can provide more
+ decompressed data before requiring new compressed input.
+
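+   For example, output can be consumed in bounded chunks by combining
+   *max_length* with :attr:`~.needs_input` and :attr:`~.eof` (an illustrative
+   sketch)::
+
+      from compression import zstd
+
+      compressed = zstd.compress(b"x" * 100_000)
+      decomp = zstd.ZstdDecompressor()
+      chunks = [decomp.decompress(compressed, max_length=16_384)]
+      # Keep draining buffered output until the frame ends or more input is needed.
+      while not decomp.eof and not decomp.needs_input:
+          chunks.append(decomp.decompress(b"", max_length=16_384))
+      data = b"".join(chunks)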
+
+Zstandard dictionaries
+----------------------
+
+
+.. function:: train_dict(samples, dict_size)
+
+   Train a Zstandard dictionary, returning a :class:`ZstdDict` instance.
+   Zstandard dictionaries enable more efficient compression of small amounts
+   of data, which are traditionally difficult to compress because they contain
+   little repetition. If you are compressing multiple similar groups of data
+   (such as similar files), Zstandard dictionaries can improve compression
+   ratios and speed significantly.
+
+   The *samples* argument (an iterable of :class:`bytes` objects) is the
+   population of samples used to train the Zstandard dictionary.
+
+ The *dict_size* argument, an integer, is the maximum size (in bytes) the
+ Zstandard dictionary should be. The Zstandard documentation suggests an
+ absolute maximum of no more than 100 KB, but the maximum can often be smaller
+ depending on the data. Larger dictionaries generally slow down compression,
+ but improve compression ratios. Smaller dictionaries lead to faster
+ compression, but reduce the compression ratio.
+
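+   For example, a dictionary can be trained on many small, similar samples and
+   then reused for both compression and decompression (a sketch; real sample
+   data will vary, and training needs a reasonably large sample set)::
+
+      from compression import zstd
+
+      samples = [f"id={i};status=ok;user=example".encode() for i in range(2000)]
+      zd = zstd.train_dict(samples, dict_size=16_384)
+      compressed = zstd.compress(samples[0], zstd_dict=zd)
+      assert zstd.decompress(compressed, zstd_dict=zd) == samples[0]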
+
+.. function:: finalize_dict(zstd_dict, /, samples, dict_size, level)
+
+ An advanced function for converting a "raw content" Zstandard dictionary into
+ a regular Zstandard dictionary. "Raw content" dictionaries are a sequence of
+ bytes that do not need to follow the structure of a normal Zstandard
+ dictionary.
+
+ The *zstd_dict* argument is a :class:`ZstdDict` instance with
+ the :attr:`~ZstdDict.dict_content` containing the raw dictionary contents.
+
+   The *samples* argument (an iterable of :class:`bytes` objects) contains
+   sample data for generating the Zstandard dictionary.
+
+ The *dict_size* argument, an integer, is the maximum size (in bytes) the
+ Zstandard dictionary should be. See :func:`train_dict` for
+ suggestions on the maximum dictionary size.
+
+ The *level* argument (an integer) is the compression level expected to be
+ passed to the compressors using this dictionary. The dictionary information
+ varies for each compression level, so tuning for the proper compression
+ level can make compression more efficient.
+
+
+.. class:: ZstdDict(dict_content, /, *, is_raw=False)
+
+ A wrapper around Zstandard dictionaries. Dictionaries can be used to improve
+ the compression of many small chunks of data. Use :func:`train_dict` if you
+ need to train a new dictionary from sample data.
+
+   The *dict_content* argument (a :term:`bytes-like object`) is the already
+   trained dictionary information.
+
+ The *is_raw* argument, a boolean, is an advanced parameter controlling the
+ meaning of *dict_content*. ``True`` means *dict_content* is a "raw content"
+ dictionary, without any format restrictions. ``False`` means *dict_content*
+ is an ordinary Zstandard dictionary, created from Zstandard functions,
+ for example, :func:`train_dict` or the external :program:`zstd` CLI.
+
+ When passing a :class:`!ZstdDict` to a function, the
+ :attr:`!as_digested_dict` and :attr:`!as_undigested_dict` attributes can
+ control how the dictionary is loaded by passing them as the ``zstd_dict``
+ argument, for example, ``compress(data, zstd_dict=zd.as_digested_dict)``.
+ Digesting a dictionary is a costly operation that occurs when loading a
+ Zstandard dictionary. When making multiple calls to compression or
+ decompression, passing a digested dictionary will reduce the overhead of
+ loading the dictionary.
+
+ .. list-table:: Difference for compression
+ :widths: 10 14 10
+ :header-rows: 1
+
+ * -
+ - Digested dictionary
+ - Undigested dictionary
+ * - Advanced parameters of the compressor which may be overridden by
+ the dictionary's parameters
+ - ``window_log``, ``hash_log``, ``chain_log``, ``search_log``,
+ ``min_match``, ``target_length``, ``strategy``,
+ ``enable_long_distance_matching``, ``ldm_hash_log``,
+ ``ldm_min_match``, ``ldm_bucket_size_log``, ``ldm_hash_rate_log``,
+ and some non-public parameters.
+ - None
+ * - :class:`!ZstdDict` internally caches the dictionary
+ - Yes. It's faster when loading a digested dictionary again with the
+ same compression level.
+ - No. If you wish to load an undigested dictionary multiple times,
+ consider reusing a compressor object.
+
+ If passing a :class:`!ZstdDict` without any attribute, an undigested
+ dictionary is passed by default when compressing and a digested dictionary
+ is generated if necessary and passed by default when decompressing.
+
+ .. attribute:: dict_content
+
+ The content of the Zstandard dictionary, a ``bytes`` object. It's the
+ same as the *dict_content* argument in the ``__init__`` method. It can
+ be used with other programs, such as the ``zstd`` CLI program.
+
+ .. attribute:: dict_id
+
+ Identifier of the Zstandard dictionary, a non-negative int value.
+
+ Non-zero means the dictionary is ordinary, created by Zstandard
+ functions and following the Zstandard format.
+
+ ``0`` means a "raw content" dictionary, free of any format restriction,
+ used for advanced users.
+
+ .. note::
+
+         The meaning of ``0`` for :attr:`!ZstdDict.dict_id` is different
+         from that of the ``dictionary_id`` attribute of the
+         :class:`FrameInfo` object returned by :func:`get_frame_info`.
+
+ .. attribute:: as_digested_dict
+
+ Load as a digested dictionary.
+
+ .. attribute:: as_undigested_dict
+
+ Load as an undigested dictionary.
+
+
+Advanced parameter control
+--------------------------
+
+.. class:: CompressionParameter()
+
+ An :class:`~enum.IntEnum` containing the advanced compression parameter
+ keys that can be used when compressing data.
+
+ The :meth:`~.bounds` method can be used on any attribute to get the valid
+ values for that parameter.
+
+   Parameters are optional; any omitted parameter will have its value selected
+ automatically.
+
+ Example getting the lower and upper bound of :attr:`~.compression_level`::
+
+ lower, upper = CompressionParameter.compression_level.bounds()
+
+ Example setting the :attr:`~.window_log` to the maximum size::
+
+ _lower, upper = CompressionParameter.window_log.bounds()
+ options = {CompressionParameter.window_log: upper}
+ compress(b'venezuelan beaver cheese', options=options)
+
+ .. method:: bounds()
+
+ Return the tuple of int bounds, ``(lower, upper)``, of a compression
+ parameter. This method should be called on the attribute you wish to
+ retrieve the bounds of. For example, to get the valid values for
+ :attr:`~.compression_level`, one may check the result of
+ ``CompressionParameter.compression_level.bounds()``.
+
+ Both the lower and upper bounds are inclusive.
+
+ .. attribute:: compression_level
+
+ A high-level means of setting other compression parameters that affect
+ the speed and ratio of compressing data. Setting the level to zero uses
+ :attr:`COMPRESSION_LEVEL_DEFAULT`.
+
+ .. attribute:: window_log
+
+ Maximum allowed back-reference distance the compressor can use when
+ compressing data, expressed as power of two, ``1 << window_log`` bytes.
+ This parameter greatly influences the memory usage of compression. Higher
+      values require more memory but achieve better compression ratios.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: hash_log
+
+ Size of the initial probe table, as a power of two. The resulting memory
+ usage is ``1 << (hash_log+2)`` bytes. Larger tables improve compression
+ ratio of strategies <= :attr:`~Strategy.dfast`, and improve compression
+ speed of strategies > :attr:`~Strategy.dfast`.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: chain_log
+
+ Size of the multi-probe search table, as a power of two. The resulting
+ memory usage is ``1 << (chain_log+2)`` bytes. Larger tables result in
+ better and slower compression. This parameter has no effect for the
+ :attr:`~Strategy.fast` strategy. It's still useful when using
+ :attr:`~Strategy.dfast` strategy, in which case it defines a secondary
+ probe table.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: search_log
+
+ Number of search attempts, as a power of two. More attempts result in
+ better and slower compression. This parameter is useless for
+ :attr:`~Strategy.fast` and :attr:`~Strategy.dfast` strategies.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: min_match
+
+      Minimum size of searched matches. Larger values increase compression and
+      decompression speed, but decrease the ratio. Note that Zstandard can
+      still find matches of smaller size; it merely tweaks its search algorithm
+      to look for this size and larger. For all strategies < :attr:`~Strategy.btopt`,
+ the effective minimum is ``4``; for all strategies
+ > :attr:`~Strategy.fast`, the effective maximum is ``6``.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: target_length
+
+ The impact of this field depends on the selected :class:`Strategy`.
+
+ For strategies :attr:`~Strategy.btopt`, :attr:`~Strategy.btultra` and
+      :attr:`~Strategy.btultra2`, the value is the length of a match
+      considered "good enough" to stop searching. Larger values improve the
+      compression ratio but make compression slower.
+
+ For strategy :attr:`~Strategy.fast`, it is the distance between match
+ sampling. Larger values make compression faster, but with a worse
+ compression ratio.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: strategy
+
+      The higher the value of the selected strategy, the more complex the
+      compression technique used by zstd, resulting in higher compression
+      ratios but slower compression.
+
+ .. seealso:: :class:`Strategy`
+
+ .. attribute:: enable_long_distance_matching
+
+ Long distance matching can be used to improve compression for large
+ inputs by finding large matches at greater distances. It increases memory
+ usage and window size.
+
+ ``True`` or ``1`` enable long distance matching while ``False`` or ``0``
+ disable it.
+
+      Enabling this parameter increases the default
+      :attr:`~CompressionParameter.window_log` to 128 MiB except when expressly
+ set to a different value. This setting is enabled by default if
+ :attr:`!window_log` >= 128 MiB and the compression
+ strategy >= :attr:`~Strategy.btopt` (compression level 16+).
+
+ .. attribute:: ldm_hash_log
+
+ Size of the table for long distance matching, as a power of two. Larger
+ values increase memory usage and compression ratio, but decrease
+ compression speed.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: ldm_min_match
+
+      Minimum match size for the long distance matcher. Values that are too
+      large or too small can often decrease the compression ratio.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: ldm_bucket_size_log
+
+ Log size of each bucket in the long distance matcher hash table for
+ collision resolution. Larger values improve collision resolution but
+ decrease compression speed.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: ldm_hash_rate_log
+
+ Frequency of inserting/looking up entries into the long distance matcher
+ hash table. Larger values improve compression speed. Deviating far from
+ the default value will likely result in a compression ratio decrease.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: checksum_flag
+
+      A four-byte checksum using XXHash64 of the uncompressed content is
+      written at the end of each frame. Zstandard's decompression code verifies
+      the checksum. If there is a mismatch, a :exc:`ZstdError` exception is
+      raised.
+
+ ``True`` or ``1`` enable checksum generation while ``False`` or ``0``
+ disable it.
+
+ .. attribute:: dict_id_flag
+
+ When compressing with a :class:`ZstdDict`, the dictionary's ID is written
+ into the frame header.
+
+ ``True`` or ``1`` enable storing the dictionary ID while ``False`` or
+ ``0`` disable it.
+
+ .. attribute:: nb_workers
+
+      Select how many threads will be spawned to compress in parallel. When
+      :attr:`!nb_workers` > 0, multi-threaded compression is enabled; a value
+      of ``1`` means "one-thread multi-threaded mode". More workers improve
+      speed, but also increase memory usage and slightly reduce the compression ratio.
+
+ A value of zero disables multi-threading.
+
+ .. attribute:: job_size
+
+ Size of a compression job, in bytes. This value is enforced only when
+ :attr:`~CompressionParameter.nb_workers` >= 1. Each compression job is
+ completed in parallel, so this value can indirectly impact the number of
+ active threads.
+
+ A value of zero causes the value to be selected automatically.
+
+ .. attribute:: overlap_log
+
+      Sets how much data is reloaded from previous jobs (threads) for new
+      jobs, to be used by the look-behind window during compression. This
+      value is only used when :attr:`~CompressionParameter.nb_workers` >= 1.
+      Acceptable values range from 0 to 9.
+
+ * 0 means dynamically set the overlap amount
+ * 1 means no overlap
+ * 9 means use a full window size from the previous job
+
+      Each increment halves or doubles the overlap size. ``8`` means an
+      overlap of ``window_size/2``, ``7`` means an overlap of ``window_size/4``,
+      etc.
+
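+   For example, several compression parameters can be combined in a single
+   *options* dictionary (an illustrative sketch)::
+
+      from compression import zstd
+
+      options = {
+          zstd.CompressionParameter.compression_level: 10,
+          zstd.CompressionParameter.checksum_flag: 1,
+      }
+      compressed = zstd.compress(b"payload " * 1_000, options=options)
+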
+.. class:: DecompressionParameter()
+
+ An :class:`~enum.IntEnum` containing the advanced decompression parameter
+ keys that can be used when decompressing data. Parameters are optional; any
+   omitted parameter will have its value selected automatically.
+
+ The :meth:`~.bounds` method can be used on any attribute to get the valid
+ values for that parameter.
+
+ Example setting the :attr:`~.window_log_max` to the maximum size::
+
+ data = compress(b'Some very long buffer of bytes...')
+
+ _lower, upper = DecompressionParameter.window_log_max.bounds()
+
+ options = {DecompressionParameter.window_log_max: upper}
+ decompress(data, options=options)
+
+ .. method:: bounds()
+
+ Return the tuple of int bounds, ``(lower, upper)``, of a decompression
+ parameter. This method should be called on the attribute you wish to
+ retrieve the bounds of.
+
+ Both the lower and upper bounds are inclusive.
+
+ .. attribute:: window_log_max
+
+ The base-two logarithm of the maximum size of the window used during
+ decompression. This can be useful to limit the amount of memory used when
+ decompressing data. A larger maximum window size leads to faster
+ decompression.
+
+ A value of zero causes the value to be selected automatically.
+
+
+.. class:: Strategy()
+
+ An :class:`~enum.IntEnum` containing strategies for compression.
+ Higher-numbered strategies correspond to more complex and slower
+ compression.
+
+ .. note::
+
+ The values of attributes of :class:`!Strategy` are not necessarily stable
+ across zstd versions. Only the ordering of the attributes may be relied
+ upon. The attributes are listed below in order.
+
+ The following strategies are available:
+
+ .. attribute:: fast
+
+ .. attribute:: dfast
+
+ .. attribute:: greedy
+
+ .. attribute:: lazy
+
+ .. attribute:: lazy2
+
+ .. attribute:: btlazy2
+
+ .. attribute:: btopt
+
+ .. attribute:: btultra
+
+ .. attribute:: btultra2
+
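+   A strategy is selected through the *options* dictionary, for example (a
+   brief sketch)::
+
+      from compression import zstd
+
+      options = {zstd.CompressionParameter.strategy: zstd.Strategy.btultra2}
+      compressed = zstd.compress(b"some data to compress", options=options)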
+
+Miscellaneous
+-------------
+
+.. function:: get_frame_info(frame_buffer)
+
+ Retrieve a :class:`FrameInfo` object containing metadata about a Zstandard
+ frame. Frames contain metadata related to the compressed data they hold.
+
+
+.. class:: FrameInfo
+
+ Metadata related to a Zstandard frame.
+
+ .. attribute:: decompressed_size
+
+ The size of the decompressed contents of the frame.
+
+ .. attribute:: dictionary_id
+
+ An integer representing the Zstandard dictionary ID needed for
+ decompressing the frame. ``0`` means the dictionary ID was not
+ recorded in the frame header. This may mean that a Zstandard dictionary
+ is not needed, or that the ID of a required dictionary was not recorded.
+
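+   For example (a sketch; whether the decompressed size is recorded in the
+   header depends on how the frame was produced)::
+
+      from compression import zstd
+
+      frame = zstd.compress(b"Hello, Zstandard!")
+      info = zstd.get_frame_info(frame)
+      # ``info.decompressed_size`` and ``info.dictionary_id`` describe the
+      # frame header.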
+
+.. attribute:: COMPRESSION_LEVEL_DEFAULT
+
+ The default compression level for Zstandard: ``3``.
+
+
+.. attribute:: zstd_version_info
+
+ Version number of the runtime zstd library as a tuple of integers
+ (major, minor, release).
+
+
+Examples
+--------
+
+Reading in a compressed file:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ with zstd.open("file.zst") as f:
+ file_content = f.read()
+
+Creating a compressed file:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ data = b"Insert Data Here"
+ with zstd.open("file.zst", "w") as f:
+ f.write(data)
+
+Compressing data in memory:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ data_in = b"Insert Data Here"
+ data_out = zstd.compress(data_in)
+
+Incremental compression:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ comp = zstd.ZstdCompressor()
+ out1 = comp.compress(b"Some data\n")
+ out2 = comp.compress(b"Another piece of data\n")
+ out3 = comp.compress(b"Even more data\n")
+ out4 = comp.flush()
+ # Concatenate all the partial results:
+ result = b"".join([out1, out2, out3, out4])
+
+Writing compressed data to an already-open file:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ with open("myfile", "wb") as f:
+ f.write(b"This data will not be compressed\n")
+ with zstd.open(f, "w") as zstf:
+ zstf.write(b"This *will* be compressed\n")
+ f.write(b"Not compressed\n")
+
+Creating a compressed file using compression parameters:
+
+.. code-block:: python
+
+ from compression import zstd
+
+ options = {
+ zstd.CompressionParameter.checksum_flag: 1
+ }
+ with zstd.open("file.zst", "w", options=options) as f:
+ f.write(b"Mind if I squeeze in?")
diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst
index 5ec23b61396..0b13c559295 100644
--- a/Doc/library/curses.rst
+++ b/Doc/library/curses.rst
@@ -68,7 +68,7 @@ The module :mod:`curses` defines the following exception:
The module :mod:`curses` defines the following functions:
-.. function:: assume_default_colors(fg, bg)
+.. function:: assume_default_colors(fg, bg, /)
Allow use of default values for colors on terminals supporting this feature.
Use this to support transparency in your application.
diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst
index 7e367a0f2b6..2ecce3dba5a 100644
--- a/Doc/library/functions.rst
+++ b/Doc/library/functions.rst
@@ -1154,44 +1154,44 @@ are always available. They are listed here in alphabetical order.
.. function:: locals()
- Return a mapping object representing the current local symbol table, with
- variable names as the keys, and their currently bound references as the
- values.
-
- At module scope, as well as when using :func:`exec` or :func:`eval` with
- a single namespace, this function returns the same namespace as
- :func:`globals`.
-
- At class scope, it returns the namespace that will be passed to the
- metaclass constructor.
-
- When using ``exec()`` or ``eval()`` with separate local and global
- arguments, it returns the local namespace passed in to the function call.
-
- In all of the above cases, each call to ``locals()`` in a given frame of
- execution will return the *same* mapping object. Changes made through
- the mapping object returned from ``locals()`` will be visible as assigned,
- reassigned, or deleted local variables, and assigning, reassigning, or
- deleting local variables will immediately affect the contents of the
- returned mapping object.
-
- In an :term:`optimized scope` (including functions, generators, and
- coroutines), each call to ``locals()`` instead returns a fresh dictionary
- containing the current bindings of the function's local variables and any
- nonlocal cell references. In this case, name binding changes made via the
- returned dict are *not* written back to the corresponding local variables
- or nonlocal cell references, and assigning, reassigning, or deleting local
- variables and nonlocal cell references does *not* affect the contents
- of previously returned dictionaries.
-
- Calling ``locals()`` as part of a comprehension in a function, generator, or
- coroutine is equivalent to calling it in the containing scope, except that
- the comprehension's initialised iteration variables will be included. In
- other scopes, it behaves as if the comprehension were running as a nested
- function.
-
- Calling ``locals()`` as part of a generator expression is equivalent to
- calling it in a nested generator function.
+ Return a mapping object representing the current local symbol table, with
+ variable names as the keys, and their currently bound references as the
+ values.
+
+ At module scope, as well as when using :func:`exec` or :func:`eval` with
+ a single namespace, this function returns the same namespace as
+ :func:`globals`.
+
+ At class scope, it returns the namespace that will be passed to the
+ metaclass constructor.
+
+ When using ``exec()`` or ``eval()`` with separate local and global
+ arguments, it returns the local namespace passed in to the function call.
+
+ In all of the above cases, each call to ``locals()`` in a given frame of
+ execution will return the *same* mapping object. Changes made through
+ the mapping object returned from ``locals()`` will be visible as assigned,
+ reassigned, or deleted local variables, and assigning, reassigning, or
+ deleting local variables will immediately affect the contents of the
+ returned mapping object.
+
+ In an :term:`optimized scope` (including functions, generators, and
+ coroutines), each call to ``locals()`` instead returns a fresh dictionary
+ containing the current bindings of the function's local variables and any
+ nonlocal cell references. In this case, name binding changes made via the
+ returned dict are *not* written back to the corresponding local variables
+ or nonlocal cell references, and assigning, reassigning, or deleting local
+ variables and nonlocal cell references does *not* affect the contents
+ of previously returned dictionaries.
+
+ Calling ``locals()`` as part of a comprehension in a function, generator, or
+ coroutine is equivalent to calling it in the containing scope, except that
+ the comprehension's initialised iteration variables will be included. In
+ other scopes, it behaves as if the comprehension were running as a nested
+ function.
+
+ Calling ``locals()`` as part of a generator expression is equivalent to
+ calling it in a nested generator function.
.. versionchanged:: 3.12
The behaviour of ``locals()`` in a comprehension has been updated as
diff --git a/Doc/library/hashlib.rst b/Doc/library/hashlib.rst
index bb2d2fad23b..4818a4944a5 100644
--- a/Doc/library/hashlib.rst
+++ b/Doc/library/hashlib.rst
@@ -284,7 +284,7 @@ a file or file-like object.
Example:
>>> import io, hashlib, hmac
- >>> with open(hashlib.__file__, "rb") as f:
+ >>> with open("library/hashlib.rst", "rb") as f:
... digest = hashlib.file_digest(f, "sha256")
...
>>> digest.hexdigest() # doctest: +ELLIPSIS
diff --git a/Doc/library/importlib.resources.abc.rst b/Doc/library/importlib.resources.abc.rst
index 7a77466bcba..8253a33f591 100644
--- a/Doc/library/importlib.resources.abc.rst
+++ b/Doc/library/importlib.resources.abc.rst
@@ -49,44 +49,44 @@
.. method:: open_resource(resource)
:abstractmethod:
- Returns an opened, :term:`file-like object` for binary reading
- of the *resource*.
+ Returns an opened, :term:`file-like object` for binary reading
+ of the *resource*.
- If the resource cannot be found, :exc:`FileNotFoundError` is
- raised.
+ If the resource cannot be found, :exc:`FileNotFoundError` is
+ raised.
.. method:: resource_path(resource)
:abstractmethod:
- Returns the file system path to the *resource*.
+ Returns the file system path to the *resource*.
- If the resource does not concretely exist on the file system,
- raise :exc:`FileNotFoundError`.
+ If the resource does not concretely exist on the file system,
+ raise :exc:`FileNotFoundError`.
.. method:: is_resource(name)
:abstractmethod:
- Returns ``True`` if the named *name* is considered a resource.
- :exc:`FileNotFoundError` is raised if *name* does not exist.
+ Returns ``True`` if the named *name* is considered a resource.
+ :exc:`FileNotFoundError` is raised if *name* does not exist.
.. method:: contents()
:abstractmethod:
- Returns an :term:`iterable` of strings over the contents of
- the package. Do note that it is not required that all names
- returned by the iterator be actual resources, e.g. it is
- acceptable to return names for which :meth:`is_resource` would
- be false.
-
- Allowing non-resource names to be returned is to allow for
- situations where how a package and its resources are stored
- are known a priori and the non-resource names would be useful.
- For instance, returning subdirectory names is allowed so that
- when it is known that the package and resources are stored on
- the file system then those subdirectory names can be used
- directly.
-
- The abstract method returns an iterable of no items.
+ Returns an :term:`iterable` of strings over the contents of
+ the package. Do note that it is not required that all names
+ returned by the iterator be actual resources, e.g. it is
+ acceptable to return names for which :meth:`is_resource` would
+ be false.
+
+ Allowing non-resource names to be returned is to allow for
+ situations where how a package and its resources are stored
+ are known a priori and the non-resource names would be useful.
+ For instance, returning subdirectory names is allowed so that
+ when it is known that the package and resources are stored on
+ the file system then those subdirectory names can be used
+ directly.
+
+ The abstract method returns an iterable of no items.
.. class:: Traversable
diff --git a/Doc/library/io.rst b/Doc/library/io.rst
index 08c76da3d8c..de5cab5aee6 100644
--- a/Doc/library/io.rst
+++ b/Doc/library/io.rst
@@ -528,14 +528,13 @@ I/O Base Classes
It inherits from :class:`IOBase`.
The main difference with :class:`RawIOBase` is that methods :meth:`read`,
- :meth:`readinto` and :meth:`write` will try (respectively) to read as much
- input as requested or to consume all given output, at the expense of
- making perhaps more than one system call.
+ :meth:`readinto` and :meth:`write` will try (respectively) to read
+ as much input as requested or to emit all provided data.
- In addition, those methods can raise :exc:`BlockingIOError` if the
- underlying raw stream is in non-blocking mode and cannot take or give
- enough data; unlike their :class:`RawIOBase` counterparts, they will
- never return ``None``.
+   In addition, if the underlying raw stream is in non-blocking mode and the
+   operation would block, :meth:`write` will raise :exc:`BlockingIOError` with
+   :attr:`BlockingIOError.characters_written` set, and :meth:`read` will
+   return the data read so far, or ``None`` if no data is available.
Besides, the :meth:`read` method does not have a default
implementation that defers to :meth:`readinto`.
@@ -568,29 +567,40 @@ I/O Base Classes
.. method:: read(size=-1, /)
- Read and return up to *size* bytes. If the argument is omitted, ``None``,
- or negative, data is read and returned until EOF is reached. An empty
- :class:`bytes` object is returned if the stream is already at EOF.
+ Read and return up to *size* bytes. If the argument is omitted, ``None``,
+      or negative, read as much as possible.
- If the argument is positive, and the underlying raw stream is not
- interactive, multiple raw reads may be issued to satisfy the byte count
- (unless EOF is reached first). But for interactive raw streams, at most
- one raw read will be issued, and a short result does not imply that EOF is
- imminent.
+ Fewer bytes may be returned than requested. An empty :class:`bytes` object
+ is returned if the stream is already at EOF. More than one read may be
+      made and calls may be retried if specific errors are encountered; see
+      :meth:`os.read` and :pep:`475` for more details. Fewer than *size* bytes
+      being returned does not imply that EOF is imminent.
- A :exc:`BlockingIOError` is raised if the underlying raw stream is in
- non blocking-mode, and has no data available at the moment.
+      When reading as much as possible, the default implementation will use
+      ``raw.readall`` if available (which should implement
+      :meth:`RawIOBase.readall`); otherwise it will read in a loop until read
+      returns ``None``, an empty :class:`bytes`, or a non-retryable error. For
+ most streams this is to EOF, but for non-blocking streams more data may
+ become available.
+
+ .. note::
+
+ When the underlying raw stream is non-blocking, implementations may
+ either raise :exc:`BlockingIOError` or return ``None`` if no data is
+ available. :mod:`io` implementations return ``None``.
.. method:: read1(size=-1, /)
- Read and return up to *size* bytes, with at most one call to the
- underlying raw stream's :meth:`~RawIOBase.read` (or
- :meth:`~RawIOBase.readinto`) method. This can be useful if you are
- implementing your own buffering on top of a :class:`BufferedIOBase`
- object.
+      Read and return up to *size* bytes, calling :meth:`~RawIOBase.readinto`,
+      which may retry if :py:const:`~errno.EINTR` is encountered per
+ :pep:`475`. If *size* is ``-1`` or not provided, the implementation will
+ choose an arbitrary value for *size*.
- If *size* is ``-1`` (the default), an arbitrary number of bytes are
- returned (more than zero unless EOF is reached).
+ .. note::
+
+ When the underlying raw stream is non-blocking, implementations may
+ either raise :exc:`BlockingIOError` or return ``None`` if no data is
+ available. :mod:`io` implementations return ``None``.
.. method:: readinto(b, /)
@@ -767,34 +777,21 @@ than raw I/O does.
.. method:: peek(size=0, /)
- Return bytes from the stream without advancing the position. At most one
- single read on the raw stream is done to satisfy the call. The number of
- bytes returned may be less or more than requested.
+      Return bytes from the stream without advancing the position. The number
+      of bytes returned may be fewer or more than requested. If the underlying
+      raw stream is non-blocking and the operation would block, empty bytes
+      (``b''``) are returned.
.. method:: read(size=-1, /)
- Read and return *size* bytes, or if *size* is not given or negative, until
- EOF or if the read call would block in non-blocking mode.
-
- .. note::
-
- When the underlying raw stream is non-blocking, a :exc:`BlockingIOError`
- may be raised if a read operation cannot be completed immediately.
+      In :class:`BufferedReader`, this is the same as :meth:`io.BufferedIOBase.read`.
.. method:: read1(size=-1, /)
- Read and return up to *size* bytes with only one call on the raw stream.
- If at least one byte is buffered, only buffered bytes are returned.
- Otherwise, one raw stream read call is made.
+      In :class:`BufferedReader`, this is the same as :meth:`io.BufferedIOBase.read1`.
.. versionchanged:: 3.7
The *size* argument is now optional.
- .. note::
-
- When the underlying raw stream is non-blocking, a :exc:`BlockingIOError`
- may be raised if a read operation cannot be completed immediately.
-
.. class:: BufferedWriter(raw, buffer_size=DEFAULT_BUFFER_SIZE)
A buffered binary stream providing higher-level access to a writeable, non
@@ -826,8 +823,8 @@ than raw I/O does.
Write the :term:`bytes-like object`, *b*, and return the
number of bytes written. When in non-blocking mode, a
- :exc:`BlockingIOError` is raised if the buffer needs to be written out but
- the raw stream blocks.
+ :exc:`BlockingIOError` with :attr:`BlockingIOError.characters_written` set
+ is raised if the buffer needs to be written out but the raw stream blocks.
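+
+      A caller can use :attr:`BlockingIOError.characters_written` to track how
+      much of *b* was consumed, for example (an illustrative sketch using a
+      non-blocking pipe)::
+
+         import io, os
+
+         r, w = os.pipe()
+         os.set_blocking(w, False)
+         writer = io.open(w, "wb")   # a BufferedWriter around the pipe
+         try:
+             writer.write(b"x" * 1_000_000)
+         except BlockingIOError as exc:
+             already_written = exc.characters_written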
.. class:: BufferedRandom(raw, buffer_size=DEFAULT_BUFFER_SIZE)
diff --git a/Doc/library/pdb.rst b/Doc/library/pdb.rst
index a0304edddf6..f4b51664545 100644
--- a/Doc/library/pdb.rst
+++ b/Doc/library/pdb.rst
@@ -80,7 +80,7 @@ The debugger's prompt is ``(Pdb)``, which is the indicator that you are in debug
You can also invoke :mod:`pdb` from the command line to debug other scripts. For
example::
- python -m pdb [-c command] (-m module | pyfile) [args ...]
+ python -m pdb [-c command] (-m module | -p pid | pyfile) [args ...]
When invoked as a module, pdb will automatically enter post-mortem debugging if
the program being debugged exits abnormally. After post-mortem debugging (or
@@ -104,6 +104,24 @@ useful than quitting the debugger upon program's exit.
.. versionchanged:: 3.7
Added the ``-m`` option.
+.. option:: -p, --pid <pid>
+
+ Attach to the process with the specified PID.
+
+ .. versionadded:: 3.14
+
+
+To attach to a running Python process for remote debugging, use the ``-p`` or
+``--pid`` option with the target process's PID::
+
+ python -m pdb -p 1234
+
+.. note::
+
+ Attaching to a process that is blocked in a system call or waiting for I/O
+ will only work once the next bytecode instruction is executed or when the
+ process receives a signal.
+
Typical usage to execute a statement under control of the debugger is::
>>> import pdb
diff --git a/Doc/library/signal.rst b/Doc/library/signal.rst
index c28841dbb8c..b0307d3dea1 100644
--- a/Doc/library/signal.rst
+++ b/Doc/library/signal.rst
@@ -211,8 +211,8 @@ The variables defined in the :mod:`signal` module are:
.. data:: SIGSTKFLT
- Stack fault on coprocessor. The Linux kernel does not raise this signal: it
- can only be raised in user space.
+ Stack fault on coprocessor. The Linux kernel does not raise this signal: it
+ can only be raised in user space.
.. availability:: Linux.
diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst
index 8fd5187e3a4..75fd637045d 100644
--- a/Doc/library/socket.rst
+++ b/Doc/library/socket.rst
@@ -362,10 +362,10 @@ Exceptions
Constants
^^^^^^^^^
- The AF_* and SOCK_* constants are now :class:`AddressFamily` and
- :class:`SocketKind` :class:`.IntEnum` collections.
+The AF_* and SOCK_* constants are now :class:`AddressFamily` and
+:class:`SocketKind` :class:`.IntEnum` collections.
- .. versionadded:: 3.4
+.. versionadded:: 3.4
.. data:: AF_UNIX
AF_INET
@@ -773,9 +773,9 @@ Constants
Constant to optimize CPU locality, to be used in conjunction with
:data:`SO_REUSEPORT`.
- .. versionadded:: 3.11
+ .. versionadded:: 3.11
- .. availability:: Linux >= 3.9
+ .. availability:: Linux >= 3.9
.. data:: SO_REUSEPORT_LB
diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst
index 3486a18b5cb..31d71031bca 100644
--- a/Doc/library/stdtypes.rst
+++ b/Doc/library/stdtypes.rst
@@ -1788,8 +1788,14 @@ expression support in the :mod:`re` module).
Return centered in a string of length *width*. Padding is done using the
specified *fillchar* (default is an ASCII space). The original string is
- returned if *width* is less than or equal to ``len(s)``.
+ returned if *width* is less than or equal to ``len(s)``. For example::
+ >>> 'Python'.center(10)
+ ' Python '
+ >>> 'Python'.center(10, '-')
+ '--Python--'
+ >>> 'Python'.center(4)
+ 'Python'
.. method:: str.count(sub[, start[, end]])
@@ -1799,8 +1805,18 @@ expression support in the :mod:`re` module).
interpreted as in slice notation.
If *sub* is empty, returns the number of empty strings between characters
- which is the length of the string plus one.
-
+ which is the length of the string plus one. For example::
+
+ >>> 'spam, spam, spam'.count('spam')
+ 3
+ >>> 'spam, spam, spam'.count('spam', 5)
+ 2
+ >>> 'spam, spam, spam'.count('spam', 5, 10)
+ 1
+ >>> 'spam, spam, spam'.count('eggs')
+ 0
+ >>> 'spam, spam, spam'.count('')
+ 17
.. method:: str.encode(encoding="utf-8", errors="strict")