applied-ai-018 committed on
Commit 39b51e5 · verified · 1 Parent(s): eabf3b2

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/INSTALLER +1 -0
  2. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/LICENSE.rst +30 -0
  3. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/METADATA +48 -0
  4. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/RECORD +11 -0
  5. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/WHEEL +5 -0
  6. env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/top_level.txt +1 -0
  7. env-llmeval/lib/python3.10/site-packages/multiprocess/__init__.py +66 -0
  8. env-llmeval/lib/python3.10/site-packages/multiprocess/connection.py +976 -0
  9. env-llmeval/lib/python3.10/site-packages/multiprocess/heap.py +337 -0
  10. env-llmeval/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py +72 -0
  11. env-llmeval/lib/python3.10/site-packages/multiprocess/process.py +438 -0
  12. env-llmeval/lib/python3.10/site-packages/multiprocess/queues.py +382 -0
  13. env-llmeval/lib/python3.10/site-packages/multiprocess/reduction.py +284 -0
  14. env-llmeval/lib/python3.10/site-packages/multiprocess/resource_tracker.py +242 -0
  15. env-llmeval/lib/python3.10/site-packages/multiprocess/shared_memory.py +534 -0
  16. env-llmeval/lib/python3.10/site-packages/multiprocess/sharedctypes.py +240 -0
  17. env-llmeval/lib/python3.10/site-packages/multiprocess/spawn.py +297 -0
  18. env-llmeval/lib/python3.10/site-packages/multiprocess/synchronize.py +400 -0
  19. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/__init__.py +0 -0
  20. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/__main__.py +34 -0
  21. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/mp_fork_bomb.py +18 -0
  22. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/mp_preload.py +18 -0
  23. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_forkserver.py +16 -0
  24. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_main_handling.py +303 -0
  25. env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_spawn.py +12 -0
  26. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/__init__.cpython-310.pyc +0 -0
  27. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/_compute_docstrings.cpython-310.pyc +0 -0
  28. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/_generated_version.cpython-310.pyc +0 -0
  29. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/acero.cpython-310.pyc +0 -0
  30. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/benchmark.cpython-310.pyc +0 -0
  31. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/cffi.cpython-310.pyc +0 -0
  32. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/compute.cpython-310.pyc +0 -0
  33. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/conftest.cpython-310.pyc +0 -0
  34. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/csv.cpython-310.pyc +0 -0
  35. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/cuda.cpython-310.pyc +0 -0
  36. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/dataset.cpython-310.pyc +0 -0
  37. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/feather.cpython-310.pyc +0 -0
  38. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/filesystem.cpython-310.pyc +0 -0
  39. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/flight.cpython-310.pyc +0 -0
  40. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/fs.cpython-310.pyc +0 -0
  41. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/hdfs.cpython-310.pyc +0 -0
  42. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/ipc.cpython-310.pyc +0 -0
  43. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/json.cpython-310.pyc +0 -0
  44. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/jvm.cpython-310.pyc +0 -0
  45. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/orc.cpython-310.pyc +0 -0
  46. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/pandas_compat.cpython-310.pyc +0 -0
  47. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/substrait.cpython-310.pyc +0 -0
  48. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/types.cpython-310.pyc +0 -0
  49. env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/util.cpython-310.pyc +0 -0
  50. env-llmeval/lib/python3.10/site-packages/pyarrow/tests/__pycache__/test_scalars.cpython-310.pyc +0 -0
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
+ pip
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/LICENSE.rst ADDED
@@ -0,0 +1,30 @@
+ *(This is the OSI approved 3-clause "New BSD License".)*
+
+ Copyright © 2016, wouter bolsterlee
+
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+ * Redistributions in binary form must reproduce the above copyright notice, this
+   list of conditions and the following disclaimer in the documentation and/or
+   other materials provided with the distribution.
+
+ * Neither the name of the author nor the names of the contributors may be used
+   to endorse or promote products derived from this software without specific
+   prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/METADATA ADDED
@@ -0,0 +1,48 @@
+ Metadata-Version: 2.1
+ Name: jsonlines
+ Version: 4.0.0
+ Summary: Library with helpers for the jsonlines file format
+ Home-page: https://github.com/wbolster/jsonlines
+ Author: wouter bolsterlee
+ Author-email: [email protected]
+ License: BSD
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: System Administrators
+ Classifier: License :: OSI Approved :: BSD License
+ Classifier: Programming Language :: Python
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Classifier: Topic :: Internet :: Log Analysis
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: System :: Logging
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.8
+ License-File: LICENSE.rst
+ Requires-Dist: attrs >=19.2.0
+
+ .. image:: https://pepy.tech/badge/jsonlines
+    :target: https://pepy.tech/project/jsonlines
+
+ .. image:: https://pepy.tech/badge/jsonlines/month
+    :target: https://pepy.tech/project/jsonlines
+
+ .. image:: https://anaconda.org/anaconda/anaconda/badges/installer/conda.svg
+    :target: https://anaconda.org/anaconda/jsonlines
+
+ =========
+ jsonlines
+ =========
+
+ ``jsonlines`` is a Python library to simplify working with jsonlines_
+ and ndjson_ data.
+
+ .. _jsonlines: http://jsonlines.org/
+ .. _ndjson: http://ndjson.org/
+
+ * Documentation: https://jsonlines.readthedocs.io/
+
+ * Python Package Index (PyPI): https://pypi.python.org/pypi/jsonlines/
+
+ * Source code and issue tracker: https://github.com/wbolster/jsonlines
+
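The README text in the METADATA above describes what the library is for: the jsonlines/ndjson format is simply one JSON value per line. A minimal sketch of that format using only the stdlib ``json`` module (the jsonlines package wraps this read/write pattern with validation and error handling; the variable names here are illustrative, not from the library):

```python
import io
import json

# JSON Lines / ndjson: one JSON value per line of text.
records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

buf = io.StringIO()
for rec in records:
    buf.write(json.dumps(rec) + "\n")  # write: dump each record, newline-terminated

buf.seek(0)
decoded = [json.loads(line) for line in buf if line.strip()]  # read: parse line by line
assert decoded == records
```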
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/RECORD ADDED
@@ -0,0 +1,11 @@
+ jsonlines-4.0.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+ jsonlines-4.0.0.dist-info/LICENSE.rst,sha256=vKNU5jkrJCH_sfHiNFRcUVQzuSkTYsG7n9EAkiuQ60I,1543
+ jsonlines-4.0.0.dist-info/METADATA,sha256=XDMhu0s_WdlpRSAcseysBZnpSInKa5EEMwyEZ-5ZtHE,1565
+ jsonlines-4.0.0.dist-info/RECORD,,
+ jsonlines-4.0.0.dist-info/WHEEL,sha256=yQN5g4mg4AybRjkgi-9yy4iQEFibGQmlz78Pik5Or-A,92
+ jsonlines-4.0.0.dist-info/top_level.txt,sha256=Y-KWmwRS4_Ci-mje2h6XC6xPeGV191NA6XhnbPot6eE,10
+ jsonlines/__init__.py,sha256=7R6ohpIk95mz93rtkWQLSK1_1UQWUd9ckaVxgFyfhsA,258
+ jsonlines/__pycache__/__init__.cpython-310.pyc,,
+ jsonlines/__pycache__/jsonlines.cpython-310.pyc,,
+ jsonlines/jsonlines.py,sha256=PpLVYlWwGiB4UoTVq2hkdp9oJ0ioSO6gW0k8_-P97-w,19895
+ jsonlines/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
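Each RECORD row above has the shape ``path,sha256=<digest>,size``, where the digest is the urlsafe-base64-encoded SHA-256 of the file contents with the ``=`` padding stripped (per the wheel RECORD format). A sketch of how such a digest can be recomputed for verification (``record_hash`` is an illustrative helper name, not part of any of these packages):

```python
import base64
import hashlib

def record_hash(data: bytes) -> str:
    # RECORD digests: urlsafe-base64(sha256(contents)) with '=' padding stripped.
    digest = hashlib.sha256(data).digest()
    return "sha256=" + base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The INSTALLER file added in this commit contains the single line "pip"
# (4 bytes, matching the size field in its RECORD entry above).
installer_entry = record_hash(b"pip\n")
```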
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: bdist_wheel (0.41.2)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
env-llmeval/lib/python3.10/site-packages/jsonlines-4.0.0.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ jsonlines
env-llmeval/lib/python3.10/site-packages/multiprocess/__init__.py ADDED
@@ -0,0 +1,66 @@
+ #
+ # Package analogous to 'threading.py' but using processes
+ #
+ # multiprocessing/__init__.py
+ #
+ # This package is intended to duplicate the functionality (and much of
+ # the API) of threading.py but uses processes instead of threads.  A
+ # subpackage 'multiprocessing.dummy' has the same API but is a simple
+ # wrapper for 'threading'.
+ #
+ # Original: Copyright (c) 2006-2008, R Oudkerk
+ # Original: Licensed to PSF under a Contributor Agreement.
+ # Forked by Mike McKerns, to support enhanced serialization.
+
+ # author, version, license, and long description
+ try: # the package is installed
+     from .__info__ import __version__, __author__, __doc__, __license__
+ except: # pragma: no cover
+     import os
+     import sys
+     root = os.path.dirname(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))
+     sys.path.append(root)
+     # get distribution meta info
+     from version import (__version__, __author__,
+                          get_license_text, get_readme_as_rst)
+     __license__ = get_license_text(os.path.join(root, 'LICENSE'))
+     __license__ = "\n%s" % __license__
+     __doc__ = get_readme_as_rst(os.path.join(root, 'README.md'))
+     del os, sys, root, get_license_text, get_readme_as_rst
+
+
+ import sys
+ from . import context
+
+ #
+ # Copy stuff from default context
+ #
+
+ __all__ = [x for x in dir(context._default_context) if not x.startswith('_')]
+ globals().update((name, getattr(context._default_context, name)) for name in __all__)
+
+ #
+ # XXX These should not really be documented or public.
+ #
+
+ SUBDEBUG = 5
+ SUBWARNING = 25
+
+ #
+ # Alias for main module -- will be reset by bootstrapping child processes
+ #
+
+ if '__main__' in sys.modules:
+     sys.modules['__mp_main__'] = sys.modules['__main__']
+
+
+ def license():
+     """print license"""
+     print(__license__)
+     return
+
+ def citation():
+     """print citation"""
+     print(__doc__[-491:-118])
+     return
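Because ``__init__.py`` copies everything from the default context into the package namespace, multiprocess is used exactly like the stdlib multiprocessing it forks. A minimal sketch using the stdlib API (identical under ``import multiprocess``), shown here via the thread-backed ``dummy`` Pool mentioned in the header comment so it runs anywhere without spawning processes:

```python
from multiprocessing.dummy import Pool  # same Pool API as the process-backed Pool

def square(x):
    return x * x

# map() distributes the work across the pool's workers and preserves order.
with Pool(2) as pool:
    results = pool.map(square, range(5))
```

With the process-backed Pool, ``square`` would additionally need to be picklable; multiprocess's enhanced serialization (via dill) is exactly what relaxes that constraint.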
env-llmeval/lib/python3.10/site-packages/multiprocess/connection.py ADDED
@@ -0,0 +1,976 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #
2
+ # A higher level module for using sockets (or Windows named pipes)
3
+ #
4
+ # multiprocessing/connection.py
5
+ #
6
+ # Copyright (c) 2006-2008, R Oudkerk
7
+ # Licensed to PSF under a Contributor Agreement.
8
+ #
9
+
10
+ __all__ = [ 'Client', 'Listener', 'Pipe', 'wait' ]
11
+
12
+ import io
13
+ import os
14
+ import sys
15
+ import socket
16
+ import struct
17
+ import time
18
+ import tempfile
19
+ import itertools
20
+
21
+ try:
22
+ import _multiprocess as _multiprocessing
23
+ except ImportError:
24
+ import _multiprocessing
25
+
26
+ from . import util
27
+
28
+ from . import AuthenticationError, BufferTooShort
29
+ from .context import reduction
30
+ _ForkingPickler = reduction.ForkingPickler
31
+
32
+ try:
33
+ import _winapi
34
+ from _winapi import WAIT_OBJECT_0, WAIT_ABANDONED_0, WAIT_TIMEOUT, INFINITE
35
+ except ImportError:
36
+ if sys.platform == 'win32':
37
+ raise
38
+ _winapi = None
39
+
40
+ #
41
+ #
42
+ #
43
+
44
+ BUFSIZE = 8192
45
+ # A very generous timeout when it comes to local connections...
46
+ CONNECTION_TIMEOUT = 20.
47
+
48
+ _mmap_counter = itertools.count()
49
+
50
+ default_family = 'AF_INET'
51
+ families = ['AF_INET']
52
+
53
+ if hasattr(socket, 'AF_UNIX'):
54
+ default_family = 'AF_UNIX'
55
+ families += ['AF_UNIX']
56
+
57
+ if sys.platform == 'win32':
58
+ default_family = 'AF_PIPE'
59
+ families += ['AF_PIPE']
60
+
61
+
62
+ def _init_timeout(timeout=CONNECTION_TIMEOUT):
63
+ return getattr(time,'monotonic',time.time)() + timeout
64
+
65
+ def _check_timeout(t):
66
+ return getattr(time,'monotonic',time.time)() > t
67
+
68
+ #
69
+ #
70
+ #
71
+
72
+ def arbitrary_address(family):
73
+ '''
74
+ Return an arbitrary free address for the given family
75
+ '''
76
+ if family == 'AF_INET':
77
+ return ('localhost', 0)
78
+ elif family == 'AF_UNIX':
79
+ return tempfile.mktemp(prefix='listener-', dir=util.get_temp_dir())
80
+ elif family == 'AF_PIPE':
81
+ return tempfile.mktemp(prefix=r'\\.\pipe\pyc-%d-%d-' %
82
+ (os.getpid(), next(_mmap_counter)), dir="")
83
+ else:
84
+ raise ValueError('unrecognized family')
85
+
86
+ def _validate_family(family):
87
+ '''
88
+ Checks if the family is valid for the current environment.
89
+ '''
90
+ if sys.platform != 'win32' and family == 'AF_PIPE':
91
+ raise ValueError('Family %s is not recognized.' % family)
92
+
93
+ if sys.platform == 'win32' and family == 'AF_UNIX':
94
+ # double check
95
+ if not hasattr(socket, family):
96
+ raise ValueError('Family %s is not recognized.' % family)
97
+
98
+ def address_type(address):
99
+ '''
100
+ Return the types of the address
101
+
102
+ This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE'
103
+ '''
104
+ if type(address) == tuple:
105
+ return 'AF_INET'
106
+ elif type(address) is str and address.startswith('\\\\'):
107
+ return 'AF_PIPE'
108
+ elif type(address) is str or util.is_abstract_socket_namespace(address):
109
+ return 'AF_UNIX'
110
+ else:
111
+ raise ValueError('address type of %r unrecognized' % address)
112
+
113
+ #
114
+ # Connection classes
115
+ #
116
+
117
+ class _ConnectionBase:
118
+ _handle = None
119
+
120
+ def __init__(self, handle, readable=True, writable=True):
121
+ handle = handle.__index__()
122
+ if handle < 0:
123
+ raise ValueError("invalid handle")
124
+ if not readable and not writable:
125
+ raise ValueError(
126
+ "at least one of `readable` and `writable` must be True")
127
+ self._handle = handle
128
+ self._readable = readable
129
+ self._writable = writable
130
+
131
+ # XXX should we use util.Finalize instead of a __del__?
132
+
133
+ def __del__(self):
134
+ if self._handle is not None:
135
+ self._close()
136
+
137
+ def _check_closed(self):
138
+ if self._handle is None:
139
+ raise OSError("handle is closed")
140
+
141
+ def _check_readable(self):
142
+ if not self._readable:
143
+ raise OSError("connection is write-only")
144
+
145
+ def _check_writable(self):
146
+ if not self._writable:
147
+ raise OSError("connection is read-only")
148
+
149
+ def _bad_message_length(self):
150
+ if self._writable:
151
+ self._readable = False
152
+ else:
153
+ self.close()
154
+ raise OSError("bad message length")
155
+
156
+ @property
157
+ def closed(self):
158
+ """True if the connection is closed"""
159
+ return self._handle is None
160
+
161
+ @property
162
+ def readable(self):
163
+ """True if the connection is readable"""
164
+ return self._readable
165
+
166
+ @property
167
+ def writable(self):
168
+ """True if the connection is writable"""
169
+ return self._writable
170
+
171
+ def fileno(self):
172
+ """File descriptor or handle of the connection"""
173
+ self._check_closed()
174
+ return self._handle
175
+
176
+ def close(self):
177
+ """Close the connection"""
178
+ if self._handle is not None:
179
+ try:
180
+ self._close()
181
+ finally:
182
+ self._handle = None
183
+
184
+ def send_bytes(self, buf, offset=0, size=None):
185
+ """Send the bytes data from a bytes-like object"""
186
+ self._check_closed()
187
+ self._check_writable()
188
+ m = memoryview(buf)
189
+ # HACK for byte-indexing of non-bytewise buffers (e.g. array.array)
190
+ if m.itemsize > 1:
191
+ m = memoryview(bytes(m))
192
+ n = len(m)
193
+ if offset < 0:
194
+ raise ValueError("offset is negative")
195
+ if n < offset:
196
+ raise ValueError("buffer length < offset")
197
+ if size is None:
198
+ size = n - offset
199
+ elif size < 0:
200
+ raise ValueError("size is negative")
201
+ elif offset + size > n:
202
+ raise ValueError("buffer length < offset + size")
203
+ self._send_bytes(m[offset:offset + size])
204
+
205
+ def send(self, obj):
206
+ """Send a (picklable) object"""
207
+ self._check_closed()
208
+ self._check_writable()
209
+ self._send_bytes(_ForkingPickler.dumps(obj))
210
+
211
+ def recv_bytes(self, maxlength=None):
212
+ """
213
+ Receive bytes data as a bytes object.
214
+ """
215
+ self._check_closed()
216
+ self._check_readable()
217
+ if maxlength is not None and maxlength < 0:
218
+ raise ValueError("negative maxlength")
219
+ buf = self._recv_bytes(maxlength)
220
+ if buf is None:
221
+ self._bad_message_length()
222
+ return buf.getvalue()
223
+
224
+ def recv_bytes_into(self, buf, offset=0):
225
+ """
226
+ Receive bytes data into a writeable bytes-like object.
227
+ Return the number of bytes read.
228
+ """
229
+ self._check_closed()
230
+ self._check_readable()
231
+ with memoryview(buf) as m:
232
+ # Get bytesize of arbitrary buffer
233
+ itemsize = m.itemsize
234
+ bytesize = itemsize * len(m)
235
+ if offset < 0:
236
+ raise ValueError("negative offset")
237
+ elif offset > bytesize:
238
+ raise ValueError("offset too large")
239
+ result = self._recv_bytes()
240
+ size = result.tell()
241
+ if bytesize < offset + size:
242
+ raise BufferTooShort(result.getvalue())
243
+ # Message can fit in dest
244
+ result.seek(0)
245
+ result.readinto(m[offset // itemsize :
246
+ (offset + size) // itemsize])
247
+ return size
248
+
249
+ def recv(self):
250
+ """Receive a (picklable) object"""
251
+ self._check_closed()
252
+ self._check_readable()
253
+ buf = self._recv_bytes()
254
+ return _ForkingPickler.loads(buf.getbuffer())
255
+
256
+ def poll(self, timeout=0.0):
257
+ """Whether there is any input available to be read"""
258
+ self._check_closed()
259
+ self._check_readable()
260
+ return self._poll(timeout)
261
+
262
+ def __enter__(self):
263
+ return self
264
+
265
+ def __exit__(self, exc_type, exc_value, exc_tb):
266
+ self.close()
267
+
268
+
269
+ if _winapi:
270
+
271
+ class PipeConnection(_ConnectionBase):
272
+ """
273
+ Connection class based on a Windows named pipe.
274
+ Overlapped I/O is used, so the handles must have been created
275
+ with FILE_FLAG_OVERLAPPED.
276
+ """
277
+ _got_empty_message = False
278
+
279
+ def _close(self, _CloseHandle=_winapi.CloseHandle):
280
+ _CloseHandle(self._handle)
281
+
282
+ def _send_bytes(self, buf):
283
+ ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
284
+ try:
285
+ if err == _winapi.ERROR_IO_PENDING:
286
+ waitres = _winapi.WaitForMultipleObjects(
287
+ [ov.event], False, INFINITE)
288
+ assert waitres == WAIT_OBJECT_0
289
+ except:
290
+ ov.cancel()
291
+ raise
292
+ finally:
293
+ nwritten, err = ov.GetOverlappedResult(True)
294
+ assert err == 0
295
+ assert nwritten == len(buf)
296
+
297
+ def _recv_bytes(self, maxsize=None):
298
+ if self._got_empty_message:
299
+ self._got_empty_message = False
300
+ return io.BytesIO()
301
+ else:
302
+ bsize = 128 if maxsize is None else min(maxsize, 128)
303
+ try:
304
+ ov, err = _winapi.ReadFile(self._handle, bsize,
305
+ overlapped=True)
306
+ try:
307
+ if err == _winapi.ERROR_IO_PENDING:
308
+ waitres = _winapi.WaitForMultipleObjects(
309
+ [ov.event], False, INFINITE)
310
+ assert waitres == WAIT_OBJECT_0
311
+ except:
312
+ ov.cancel()
313
+ raise
314
+ finally:
315
+ nread, err = ov.GetOverlappedResult(True)
316
+ if err == 0:
317
+ f = io.BytesIO()
318
+ f.write(ov.getbuffer())
319
+ return f
320
+ elif err == _winapi.ERROR_MORE_DATA:
321
+ return self._get_more_data(ov, maxsize)
322
+ except OSError as e:
323
+ if e.winerror == _winapi.ERROR_BROKEN_PIPE:
324
+ raise EOFError
325
+ else:
326
+ raise
327
+ raise RuntimeError("shouldn't get here; expected KeyboardInterrupt")
328
+
329
+ def _poll(self, timeout):
330
+ if (self._got_empty_message or
331
+ _winapi.PeekNamedPipe(self._handle)[0] != 0):
332
+ return True
333
+ return bool(wait([self], timeout))
334
+
335
+ def _get_more_data(self, ov, maxsize):
336
+ buf = ov.getbuffer()
337
+ f = io.BytesIO()
338
+ f.write(buf)
339
+ left = _winapi.PeekNamedPipe(self._handle)[1]
340
+ assert left > 0
341
+ if maxsize is not None and len(buf) + left > maxsize:
342
+ self._bad_message_length()
343
+ ov, err = _winapi.ReadFile(self._handle, left, overlapped=True)
344
+ rbytes, err = ov.GetOverlappedResult(True)
345
+ assert err == 0
346
+ assert rbytes == left
347
+ f.write(ov.getbuffer())
348
+ return f
349
+
350
+
351
+ class Connection(_ConnectionBase):
352
+ """
353
+ Connection class based on an arbitrary file descriptor (Unix only), or
354
+ a socket handle (Windows).
355
+ """
356
+
357
+ if _winapi:
358
+ def _close(self, _close=_multiprocessing.closesocket):
359
+ _close(self._handle)
360
+ _write = _multiprocessing.send
361
+ _read = _multiprocessing.recv
362
+ else:
363
+ def _close(self, _close=os.close):
364
+ _close(self._handle)
365
+ _write = os.write
366
+ _read = os.read
367
+
368
+ def _send(self, buf, write=_write):
369
+ remaining = len(buf)
370
+ while True:
371
+ n = write(self._handle, buf)
372
+ remaining -= n
373
+ if remaining == 0:
374
+ break
375
+ buf = buf[n:]
376
+
377
+ def _recv(self, size, read=_read):
378
+ buf = io.BytesIO()
379
+ handle = self._handle
380
+ remaining = size
381
+ while remaining > 0:
382
+ chunk = read(handle, remaining)
383
+ n = len(chunk)
384
+ if n == 0:
385
+ if remaining == size:
386
+ raise EOFError
387
+ else:
388
+ raise OSError("got end of file during message")
389
+ buf.write(chunk)
390
+ remaining -= n
391
+ return buf
392
+
393
+ def _send_bytes(self, buf):
394
+ n = len(buf)
395
+ if n > 0x7fffffff:
396
+ pre_header = struct.pack("!i", -1)
397
+ header = struct.pack("!Q", n)
398
+ self._send(pre_header)
399
+ self._send(header)
400
+ self._send(buf)
401
+ else:
402
+ # For wire compatibility with 3.7 and lower
403
+ header = struct.pack("!i", n)
404
+ if n > 16384:
405
+ # The payload is large so Nagle's algorithm won't be triggered
406
+ # and we'd better avoid the cost of concatenation.
407
+ self._send(header)
408
+ self._send(buf)
409
+ else:
410
+ # Issue #20540: concatenate before sending, to avoid delays due
411
+ # to Nagle's algorithm on a TCP socket.
412
+ # Also note we want to avoid sending a 0-length buffer separately,
413
+ # to avoid "broken pipe" errors if the other end closed the pipe.
414
+ self._send(header + buf)
415
+
416
+ def _recv_bytes(self, maxsize=None):
417
+ buf = self._recv(4)
418
+ size, = struct.unpack("!i", buf.getvalue())
419
+ if size == -1:
420
+ buf = self._recv(8)
421
+ size, = struct.unpack("!Q", buf.getvalue())
422
+ if maxsize is not None and size > maxsize:
423
+ return None
424
+ return self._recv(size)
425
+
426
+ def _poll(self, timeout):
427
+ r = wait([self], timeout)
428
+ return bool(r)
429
+
430
+
431
+ #
432
+ # Public functions
433
+ #
434
+
435
+ class Listener(object):
436
+ '''
437
+ Returns a listener object.
438
+
439
+ This is a wrapper for a bound socket which is 'listening' for
440
+ connections, or for a Windows named pipe.
441
+ '''
442
+ def __init__(self, address=None, family=None, backlog=1, authkey=None):
443
+ family = family or (address and address_type(address)) \
444
+ or default_family
445
+ address = address or arbitrary_address(family)
446
+
447
+ _validate_family(family)
448
+ if family == 'AF_PIPE':
449
+ self._listener = PipeListener(address, backlog)
450
+ else:
451
+ self._listener = SocketListener(address, family, backlog)
452
+
453
+ if authkey is not None and not isinstance(authkey, bytes):
454
+ raise TypeError('authkey should be a byte string')
455
+
456
+ self._authkey = authkey
457
+
458
+ def accept(self):
459
+ '''
460
+ Accept a connection on the bound socket or named pipe of `self`.
461
+
462
+ Returns a `Connection` object.
463
+ '''
464
+ if self._listener is None:
465
+ raise OSError('listener is closed')
466
+ c = self._listener.accept()
467
+ if self._authkey:
468
+ deliver_challenge(c, self._authkey)
469
+ answer_challenge(c, self._authkey)
470
+ return c
471
+
472
+ def close(self):
473
+ '''
474
+ Close the bound socket or named pipe of `self`.
475
+ '''
476
+ listener = self._listener
477
+ if listener is not None:
478
+ self._listener = None
479
+ listener.close()
480
+
481
+ @property
482
+ def address(self):
483
+ return self._listener._address
484
+
485
+ @property
486
+ def last_accepted(self):
487
+ return self._listener._last_accepted
488
+
489
+ def __enter__(self):
490
+ return self
491
+
492
+ def __exit__(self, exc_type, exc_value, exc_tb):
493
+ self.close()
494
+
495
+
496
+ def Client(address, family=None, authkey=None):
497
+ '''
498
+ Returns a connection to the address of a `Listener`
499
+ '''
500
+ family = family or address_type(address)
501
+ _validate_family(family)
502
+ if family == 'AF_PIPE':
503
+ c = PipeClient(address)
504
+ else:
505
+ c = SocketClient(address)
506
+
507
+ if authkey is not None and not isinstance(authkey, bytes):
508
+ raise TypeError('authkey should be a byte string')
509
+
510
+ if authkey is not None:
511
+ answer_challenge(c, authkey)
512
+ deliver_challenge(c, authkey)
513
+
514
+ return c
515
+
516
+
517
+ if sys.platform != 'win32':
518
+
519
+ def Pipe(duplex=True):
520
+ '''
521
+ Returns pair of connection objects at either end of a pipe
522
+ '''
523
+ if duplex:
524
+ s1, s2 = socket.socketpair()
525
+ s1.setblocking(True)
526
+ s2.setblocking(True)
527
+ c1 = Connection(s1.detach())
528
+ c2 = Connection(s2.detach())
529
+ else:
530
+ fd1, fd2 = os.pipe()
531
+ c1 = Connection(fd1, writable=False)
532
+ c2 = Connection(fd2, readable=False)
533
+
534
+ return c1, c2
535
+
536
+ else:
537
+
538
+ def Pipe(duplex=True):
539
+ '''
540
+ Returns pair of connection objects at either end of a pipe
541
+ '''
542
+ address = arbitrary_address('AF_PIPE')
543
+ if duplex:
544
+ openmode = _winapi.PIPE_ACCESS_DUPLEX
545
+ access = _winapi.GENERIC_READ | _winapi.GENERIC_WRITE
546
+ obsize, ibsize = BUFSIZE, BUFSIZE
547
+ else:
548
+ openmode = _winapi.PIPE_ACCESS_INBOUND
549
+ access = _winapi.GENERIC_WRITE
550
+ obsize, ibsize = 0, BUFSIZE
+        h1 = _winapi.CreateNamedPipe(
+            address, openmode | _winapi.FILE_FLAG_OVERLAPPED |
+            _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE,
+            _winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
+            _winapi.PIPE_WAIT,
+            1, obsize, ibsize, _winapi.NMPWAIT_WAIT_FOREVER,
+            # default security descriptor: the handle cannot be inherited
+            _winapi.NULL
+            )
+        h2 = _winapi.CreateFile(
+            address, access, 0, _winapi.NULL, _winapi.OPEN_EXISTING,
+            _winapi.FILE_FLAG_OVERLAPPED, _winapi.NULL
+            )
+        _winapi.SetNamedPipeHandleState(
+            h2, _winapi.PIPE_READMODE_MESSAGE, None, None
+            )
+
+        overlapped = _winapi.ConnectNamedPipe(h1, overlapped=True)
+        _, err = overlapped.GetOverlappedResult(True)
+        assert err == 0
+
+        c1 = PipeConnection(h1, writable=duplex)
+        c2 = PipeConnection(h2, readable=duplex)
+
+        return c1, c2
+
+#
+# Definitions for connections based on sockets
+#
+
+class SocketListener(object):
+    '''
+    Representation of a socket which is bound to an address and listening
+    '''
+    def __init__(self, address, family, backlog=1):
+        self._socket = socket.socket(getattr(socket, family))
+        try:
+            # SO_REUSEADDR has different semantics on Windows (issue #2550).
+            if os.name == 'posix':
+                self._socket.setsockopt(socket.SOL_SOCKET,
+                                        socket.SO_REUSEADDR, 1)
+            self._socket.setblocking(True)
+            self._socket.bind(address)
+            self._socket.listen(backlog)
+            self._address = self._socket.getsockname()
+        except OSError:
+            self._socket.close()
+            raise
+        self._family = family
+        self._last_accepted = None
+
+        if family == 'AF_UNIX' and not util.is_abstract_socket_namespace(address):
+            # Linux abstract socket namespaces do not need to be explicitly unlinked
+            self._unlink = util.Finalize(
+                self, os.unlink, args=(address,), exitpriority=0
+                )
+        else:
+            self._unlink = None
+
+    def accept(self):
+        s, self._last_accepted = self._socket.accept()
+        s.setblocking(True)
+        return Connection(s.detach())
+
+    def close(self):
+        try:
+            self._socket.close()
+        finally:
+            unlink = self._unlink
+            if unlink is not None:
+                self._unlink = None
+                unlink()
+
+
+def SocketClient(address):
+    '''
+    Return a connection object connected to the socket given by `address`
+    '''
+    family = address_type(address)
+    with socket.socket( getattr(socket, family) ) as s:
+        s.setblocking(True)
+        s.connect(address)
+        return Connection(s.detach())
+
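The SocketListener/SocketClient pair above backs the public Listener/Client API. A minimal round-trip sketch, using the stdlib multiprocessing.connection (whose interface this vendored multiprocess module mirrors); port 0 asks the OS for a free ephemeral port:

```python
# Serve one connection on an ephemeral port, connect to it, exchange a message.
import threading
from multiprocessing.connection import Client, Listener

listener = Listener(('127.0.0.1', 0), 'AF_INET')   # port 0: pick a free port

def serve():
    with listener.accept() as conn:     # SocketListener.accept() under the hood
        conn.send_bytes(b'pong')

t = threading.Thread(target=serve)
t.start()
with Client(listener.address, 'AF_INET') as conn:  # SocketClient under the hood
    reply = conn.recv_bytes()
t.join()
listener.close()
print(reply)   # b'pong'
```

Because no authkey is given, no HMAC handshake happens; the connection is a plain byte-message channel.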
+#
+# Definitions for connections based on named pipes
+#
+
+if sys.platform == 'win32':
+
+    class PipeListener(object):
+        '''
+        Representation of a named pipe
+        '''
+        def __init__(self, address, backlog=None):
+            self._address = address
+            self._handle_queue = [self._new_handle(first=True)]
+
+            self._last_accepted = None
+            util.sub_debug('listener created with address=%r', self._address)
+            self.close = util.Finalize(
+                self, PipeListener._finalize_pipe_listener,
+                args=(self._handle_queue, self._address), exitpriority=0
+                )
+
+        def _new_handle(self, first=False):
+            flags = _winapi.PIPE_ACCESS_DUPLEX | _winapi.FILE_FLAG_OVERLAPPED
+            if first:
+                flags |= _winapi.FILE_FLAG_FIRST_PIPE_INSTANCE
+            return _winapi.CreateNamedPipe(
+                self._address, flags,
+                _winapi.PIPE_TYPE_MESSAGE | _winapi.PIPE_READMODE_MESSAGE |
+                _winapi.PIPE_WAIT,
+                _winapi.PIPE_UNLIMITED_INSTANCES, BUFSIZE, BUFSIZE,
+                _winapi.NMPWAIT_WAIT_FOREVER, _winapi.NULL
+                )
+
+        def accept(self):
+            self._handle_queue.append(self._new_handle())
+            handle = self._handle_queue.pop(0)
+            try:
+                ov = _winapi.ConnectNamedPipe(handle, overlapped=True)
+            except OSError as e:
+                if e.winerror != _winapi.ERROR_NO_DATA:
+                    raise
+                # ERROR_NO_DATA can occur if a client has already connected,
+                # written data and then disconnected -- see Issue 14725.
+            else:
+                try:
+                    res = _winapi.WaitForMultipleObjects(
+                        [ov.event], False, INFINITE)
+                except:
+                    ov.cancel()
+                    _winapi.CloseHandle(handle)
+                    raise
+                finally:
+                    _, err = ov.GetOverlappedResult(True)
+                    assert err == 0
+            return PipeConnection(handle)
+
+        @staticmethod
+        def _finalize_pipe_listener(queue, address):
+            util.sub_debug('closing listener with address=%r', address)
+            for handle in queue:
+                _winapi.CloseHandle(handle)
+
+    def PipeClient(address):
+        '''
+        Return a connection object connected to the pipe given by `address`
+        '''
+        t = _init_timeout()
+        while 1:
+            try:
+                _winapi.WaitNamedPipe(address, 1000)
+                h = _winapi.CreateFile(
+                    address, _winapi.GENERIC_READ | _winapi.GENERIC_WRITE,
+                    0, _winapi.NULL, _winapi.OPEN_EXISTING,
+                    _winapi.FILE_FLAG_OVERLAPPED, _winapi.NULL
+                    )
+            except OSError as e:
+                if e.winerror not in (_winapi.ERROR_SEM_TIMEOUT,
+                                      _winapi.ERROR_PIPE_BUSY) or _check_timeout(t):
+                    raise
+            else:
+                break
+        else:
+            raise
+
+        _winapi.SetNamedPipeHandleState(
+            h, _winapi.PIPE_READMODE_MESSAGE, None, None
+            )
+        return PipeConnection(h)
+
+#
+# Authentication stuff
+#
+
+MESSAGE_LENGTH = 20
+
+CHALLENGE = b'#CHALLENGE#'
+WELCOME = b'#WELCOME#'
+FAILURE = b'#FAILURE#'
+
+def deliver_challenge(connection, authkey):
+    import hmac
+    if not isinstance(authkey, bytes):
+        raise ValueError(
+            "Authkey must be bytes, not {0!s}".format(type(authkey)))
+    message = os.urandom(MESSAGE_LENGTH)
+    connection.send_bytes(CHALLENGE + message)
+    digest = hmac.new(authkey, message, 'md5').digest()
+    response = connection.recv_bytes(256)        # reject large message
+    if response == digest:
+        connection.send_bytes(WELCOME)
+    else:
+        connection.send_bytes(FAILURE)
+        raise AuthenticationError('digest received was wrong')
+
+def answer_challenge(connection, authkey):
+    import hmac
+    if not isinstance(authkey, bytes):
+        raise ValueError(
+            "Authkey must be bytes, not {0!s}".format(type(authkey)))
+    message = connection.recv_bytes(256)         # reject large message
+    assert message[:len(CHALLENGE)] == CHALLENGE, 'message = %r' % message
+    message = message[len(CHALLENGE):]
+    digest = hmac.new(authkey, message, 'md5').digest()
+    connection.send_bytes(digest)
+    response = connection.recv_bytes(256)        # reject large message
+    if response != WELCOME:
+        raise AuthenticationError('digest sent was rejected')
+
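The deliver_challenge()/answer_challenge() protocol above can be exercised standalone. This sketch replays both sides over an in-memory stand-in (FakeConnection is hypothetical, not part of the module): the server sends a random challenge, the client returns an HMAC-MD5 digest of it keyed by its authkey, and the server replies with the verdict.

```python
# Standalone sketch of the HMAC challenge/response handshake.
import hmac
import os

MESSAGE_LENGTH = 20
CHALLENGE = b'#CHALLENGE#'
WELCOME = b'#WELCOME#'
FAILURE = b'#FAILURE#'

class FakeConnection:
    """Two of these share a pair of lists acting as one-way byte channels."""
    def __init__(self, inbox, outbox):
        self.inbox, self.outbox = inbox, outbox
    def send_bytes(self, data):
        self.outbox.append(data)
    def recv_bytes(self, maxlength=None):
        return self.inbox.pop(0)

def handshake(server_key, client_key):
    a, b = [], []
    server, client = FakeConnection(a, b), FakeConnection(b, a)
    # server: deliver the challenge
    message = os.urandom(MESSAGE_LENGTH)
    server.send_bytes(CHALLENGE + message)
    # client: answer it with an HMAC over the random message
    msg = client.recv_bytes(256)
    digest = hmac.new(client_key, msg[len(CHALLENGE):], 'md5').digest()
    client.send_bytes(digest)
    # server: verify and report the verdict
    expected = hmac.new(server_key, message, 'md5').digest()
    server.send_bytes(WELCOME if server.recv_bytes(256) == expected else FAILURE)
    return client.recv_bytes(256)

print(handshake(b'secret', b'secret'))  # b'#WELCOME#'
print(handshake(b'secret', b'wrong'))   # b'#FAILURE#'
```

Because the challenge is fresh random bytes each time, a captured digest cannot be replayed against a later handshake.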
+#
+# Support for using xmlrpclib for serialization
+#
+
+class ConnectionWrapper(object):
+    def __init__(self, conn, dumps, loads):
+        self._conn = conn
+        self._dumps = dumps
+        self._loads = loads
+        for attr in ('fileno', 'close', 'poll', 'recv_bytes', 'send_bytes'):
+            obj = getattr(conn, attr)
+            setattr(self, attr, obj)
+    def send(self, obj):
+        s = self._dumps(obj)
+        self._conn.send_bytes(s)
+    def recv(self):
+        s = self._conn.recv_bytes()
+        return self._loads(s)
+
+def _xml_dumps(obj):
+    return xmlrpclib.dumps((obj,), None, None, None, 1).encode('utf-8')
+
+def _xml_loads(s):
+    (obj,), method = xmlrpclib.loads(s.decode('utf-8'))
+    return obj
+
+class XmlListener(Listener):
+    def accept(self):
+        global xmlrpclib
+        import xmlrpc.client as xmlrpclib
+        obj = Listener.accept(self)
+        return ConnectionWrapper(obj, _xml_dumps, _xml_loads)
+
+def XmlClient(*args, **kwds):
+    global xmlrpclib
+    import xmlrpc.client as xmlrpclib
+    return ConnectionWrapper(Client(*args, **kwds), _xml_dumps, _xml_loads)
+
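The _xml_dumps/_xml_loads pair above swaps pickle for XML-RPC serialization, which only covers basic types (numbers, strings, lists, dicts) but is language-neutral. A round-trip sketch of the same two calls:

```python
# XML-RPC round trip: the final positional argument 1 is allow_none.
import xmlrpc.client as xmlrpclib

def xml_dumps(obj):
    return xmlrpclib.dumps((obj,), None, None, None, 1).encode('utf-8')

def xml_loads(s):
    (obj,), method = xmlrpclib.loads(s.decode('utf-8'))
    return obj

payload = {'id': 7, 'tags': ['a', 'b']}
roundtripped = xml_loads(xml_dumps(payload))
print(roundtripped == payload)   # True
```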
+#
+# Wait
+#
+
+if sys.platform == 'win32':
+
+    def _exhaustive_wait(handles, timeout):
+        # Return ALL handles which are currently signalled.  (Only
+        # returning the first signalled might create starvation issues.)
+        L = list(handles)
+        ready = []
+        while L:
+            res = _winapi.WaitForMultipleObjects(L, False, timeout)
+            if res == WAIT_TIMEOUT:
+                break
+            elif WAIT_OBJECT_0 <= res < WAIT_OBJECT_0 + len(L):
+                res -= WAIT_OBJECT_0
+            elif WAIT_ABANDONED_0 <= res < WAIT_ABANDONED_0 + len(L):
+                res -= WAIT_ABANDONED_0
+            else:
+                raise RuntimeError('Should not get here')
+            ready.append(L[res])
+            L = L[res+1:]
+            timeout = 0
+        return ready
+
+    _ready_errors = {_winapi.ERROR_BROKEN_PIPE, _winapi.ERROR_NETNAME_DELETED}
+
+    def wait(object_list, timeout=None):
+        '''
+        Wait till an object in object_list is ready/readable.
+
+        Returns list of those objects in object_list which are ready/readable.
+        '''
+        if timeout is None:
+            timeout = INFINITE
+        elif timeout < 0:
+            timeout = 0
+        else:
+            timeout = int(timeout * 1000 + 0.5)
+
+        object_list = list(object_list)
+        waithandle_to_obj = {}
+        ov_list = []
+        ready_objects = set()
+        ready_handles = set()
+
+        try:
+            for o in object_list:
+                try:
+                    fileno = getattr(o, 'fileno')
+                except AttributeError:
+                    waithandle_to_obj[o.__index__()] = o
+                else:
+                    # start an overlapped read of length zero
+                    try:
+                        ov, err = _winapi.ReadFile(fileno(), 0, True)
+                    except OSError as e:
+                        ov, err = None, e.winerror
+                        if err not in _ready_errors:
+                            raise
+                    if err == _winapi.ERROR_IO_PENDING:
+                        ov_list.append(ov)
+                        waithandle_to_obj[ov.event] = o
+                    else:
+                        # If o.fileno() is an overlapped pipe handle and
+                        # err == 0 then there is a zero length message
+                        # in the pipe, but it HAS NOT been consumed...
+                        if ov and sys.getwindowsversion()[:2] >= (6, 2):
+                            # ... except on Windows 8 and later, where
+                            # the message HAS been consumed.
+                            try:
+                                _, err = ov.GetOverlappedResult(False)
+                            except OSError as e:
+                                err = e.winerror
+                            if not err and hasattr(o, '_got_empty_message'):
+                                o._got_empty_message = True
+                        ready_objects.add(o)
+                        timeout = 0
+
+            ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
+        finally:
+            # request that overlapped reads stop
+            for ov in ov_list:
+                ov.cancel()
+
+            # wait for all overlapped reads to stop
+            for ov in ov_list:
+                try:
+                    _, err = ov.GetOverlappedResult(True)
+                except OSError as e:
+                    err = e.winerror
+                    if err not in _ready_errors:
+                        raise
+                if err != _winapi.ERROR_OPERATION_ABORTED:
+                    o = waithandle_to_obj[ov.event]
+                    ready_objects.add(o)
+                    if err == 0:
+                        # If o.fileno() is an overlapped pipe handle then
+                        # a zero length message HAS been consumed.
+                        if hasattr(o, '_got_empty_message'):
+                            o._got_empty_message = True
+
+        ready_objects.update(waithandle_to_obj[h] for h in ready_handles)
+        return [o for o in object_list if o in ready_objects]
+
+else:
+
+    import selectors
+
+    # poll/select have the advantage of not requiring any extra file
+    # descriptor, contrarily to epoll/kqueue (also, they require a single
+    # syscall).
+    if hasattr(selectors, 'PollSelector'):
+        _WaitSelector = selectors.PollSelector
+    else:
+        _WaitSelector = selectors.SelectSelector
+
+    def wait(object_list, timeout=None):
+        '''
+        Wait till an object in object_list is ready/readable.
+
+        Returns list of those objects in object_list which are ready/readable.
+        '''
+        with _WaitSelector() as selector:
+            for obj in object_list:
+                selector.register(obj, selectors.EVENT_READ)
+
+            if timeout is not None:
+                deadline = getattr(time,'monotonic',time.time)() + timeout
+
+            while True:
+                ready = selector.select(timeout)
+                if ready:
+                    return [key.fileobj for (key, events) in ready]
+                else:
+                    if timeout is not None:
+                        timeout = deadline - getattr(time,'monotonic',time.time)()
+                        if timeout < 0:
+                            return ready
+
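A usage sketch of the selector-based wait() above, through the public multiprocessing.connection API (this vendored multiprocess module exposes the same interface):

```python
# wait() returns only the objects with data pending, preserving list order.
from multiprocessing import Pipe
from multiprocessing.connection import wait

r1, w1 = Pipe(duplex=False)
r2, w2 = Pipe(duplex=False)
w2.send('hello')                      # only the second pipe has data pending

ready = wait([r1, r2], timeout=1.0)
msg = ready[0].recv()
print(ready == [r2], msg)             # True hello
```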
+#
+# Make connection and socket objects sharable if possible
+#
+
+if sys.platform == 'win32':
+    def reduce_connection(conn):
+        handle = conn.fileno()
+        with socket.fromfd(handle, socket.AF_INET, socket.SOCK_STREAM) as s:
+            from . import resource_sharer
+            ds = resource_sharer.DupSocket(s)
+            return rebuild_connection, (ds, conn.readable, conn.writable)
+    def rebuild_connection(ds, readable, writable):
+        sock = ds.detach()
+        return Connection(sock.detach(), readable, writable)
+    reduction.register(Connection, reduce_connection)
+
+    def reduce_pipe_connection(conn):
+        access = ((_winapi.FILE_GENERIC_READ if conn.readable else 0) |
+                  (_winapi.FILE_GENERIC_WRITE if conn.writable else 0))
+        dh = reduction.DupHandle(conn.fileno(), access)
+        return rebuild_pipe_connection, (dh, conn.readable, conn.writable)
+    def rebuild_pipe_connection(dh, readable, writable):
+        handle = dh.detach()
+        return PipeConnection(handle, readable, writable)
+    reduction.register(PipeConnection, reduce_pipe_connection)
+
+else:
+    def reduce_connection(conn):
+        df = reduction.DupFd(conn.fileno())
+        return rebuild_connection, (df, conn.readable, conn.writable)
+    def rebuild_connection(df, readable, writable):
+        fd = df.detach()
+        return Connection(fd, readable, writable)
+    reduction.register(Connection, reduce_connection)
env-llmeval/lib/python3.10/site-packages/multiprocess/heap.py ADDED
@@ -0,0 +1,337 @@
+#
+# Module which supports allocation of memory from an mmap
+#
+# multiprocessing/heap.py
+#
+# Copyright (c) 2006-2008, R Oudkerk
+# Licensed to PSF under a Contributor Agreement.
+#
+
+import bisect
+from collections import defaultdict
+import mmap
+import os
+import sys
+import tempfile
+import threading
+
+from .context import reduction, assert_spawning
+from . import util
+
+__all__ = ['BufferWrapper']
+
+#
+# Inheritable class which wraps an mmap, and from which blocks can be allocated
+#
+
+if sys.platform == 'win32':
+
+    import _winapi
+
+    class Arena(object):
+        """
+        A shared memory area backed by anonymous memory (Windows).
+        """
+
+        _rand = tempfile._RandomNameSequence()
+
+        def __init__(self, size):
+            self.size = size
+            for i in range(100):
+                name = 'pym-%d-%s' % (os.getpid(), next(self._rand))
+                buf = mmap.mmap(-1, size, tagname=name)
+                if _winapi.GetLastError() == 0:
+                    break
+                # We have reopened a preexisting mmap.
+                buf.close()
+            else:
+                raise FileExistsError('Cannot find name for new mmap')
+            self.name = name
+            self.buffer = buf
+            self._state = (self.size, self.name)
+
+        def __getstate__(self):
+            assert_spawning(self)
+            return self._state
+
+        def __setstate__(self, state):
+            self.size, self.name = self._state = state
+            # Reopen existing mmap
+            self.buffer = mmap.mmap(-1, self.size, tagname=self.name)
+            # XXX Temporarily preventing buildbot failures while determining
+            # XXX the correct long-term fix. See issue 23060
+            #assert _winapi.GetLastError() == _winapi.ERROR_ALREADY_EXISTS
+
+else:
+
+    class Arena(object):
+        """
+        A shared memory area backed by a temporary file (POSIX).
+        """
+
+        if sys.platform == 'linux':
+            _dir_candidates = ['/dev/shm']
+        else:
+            _dir_candidates = []
+
+        def __init__(self, size, fd=-1):
+            self.size = size
+            self.fd = fd
+            if fd == -1:
+                # Arena is created anew (if fd != -1, it means we're coming
+                # from rebuild_arena() below)
+                self.fd, name = tempfile.mkstemp(
+                    prefix='pym-%d-'%os.getpid(),
+                    dir=self._choose_dir(size))
+                os.unlink(name)
+                util.Finalize(self, os.close, (self.fd,))
+                os.ftruncate(self.fd, size)
+            self.buffer = mmap.mmap(self.fd, self.size)
+
+        def _choose_dir(self, size):
+            # Choose a non-storage backed directory if possible,
+            # to improve performance
+            for d in self._dir_candidates:
+                st = os.statvfs(d)
+                if st.f_bavail * st.f_frsize >= size:  # enough free space?
+                    return d
+            return util.get_temp_dir()
+
+def reduce_arena(a):
+    if a.fd == -1:
+        raise ValueError('Arena is unpicklable because '
+                         'forking was enabled when it was created')
+    return rebuild_arena, (a.size, reduction.DupFd(a.fd))
+
+def rebuild_arena(size, dupfd):
+    return Arena(size, dupfd.detach())
+
+reduction.register(Arena, reduce_arena)
+
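The POSIX Arena above follows a classic pattern: create a temporary file, unlink it immediately so it lives only through the open descriptor, size it with ftruncate, then mmap it. A minimal standalone sketch of the same steps:

```python
# Anonymous-by-unlink shared memory, as the POSIX Arena sets it up.
import mmap
import os
import tempfile

size = 4096
fd, name = tempfile.mkstemp(prefix='pym-demo-')
os.unlink(name)          # the file now exists only through the open fd
os.ftruncate(fd, size)   # give the mapping its backing size
buf = mmap.mmap(fd, size)
buf[:5] = b'hello'
data = bytes(buf[:5])
print(data)              # b'hello'
buf.close()
os.close(fd)
```

Because the fd can be duplicated into a child process (as reduce_arena/rebuild_arena do via DupFd), both processes see the same pages.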
+#
+# Class allowing allocation of chunks of memory from arenas
+#
+
+class Heap(object):
+
+    # Minimum malloc() alignment
+    _alignment = 8
+
+    _DISCARD_FREE_SPACE_LARGER_THAN = 4 * 1024 ** 2  # 4 MB
+    _DOUBLE_ARENA_SIZE_UNTIL = 4 * 1024 ** 2
+
+    def __init__(self, size=mmap.PAGESIZE):
+        self._lastpid = os.getpid()
+        self._lock = threading.Lock()
+        # Current arena allocation size
+        self._size = size
+        # A sorted list of available block sizes in arenas
+        self._lengths = []
+
+        # Free block management:
+        # - map each block size to a list of `(Arena, start, stop)` blocks
+        self._len_to_seq = {}
+        # - map `(Arena, start)` tuple to the `(Arena, start, stop)` block
+        #   starting at that offset
+        self._start_to_block = {}
+        # - map `(Arena, stop)` tuple to the `(Arena, start, stop)` block
+        #   ending at that offset
+        self._stop_to_block = {}
+
+        # Map arenas to their `(Arena, start, stop)` blocks in use
+        self._allocated_blocks = defaultdict(set)
+        self._arenas = []
+
+        # List of pending blocks to free - see comment in free() below
+        self._pending_free_blocks = []
+
+        # Statistics
+        self._n_mallocs = 0
+        self._n_frees = 0
+
+    @staticmethod
+    def _roundup(n, alignment):
+        # alignment must be a power of 2
+        mask = alignment - 1
+        return (n + mask) & ~mask
+
+    def _new_arena(self, size):
+        # Create a new arena with at least the given *size*
+        length = self._roundup(max(self._size, size), mmap.PAGESIZE)
+        # We carve larger and larger arenas, for efficiency, until we
+        # reach a large-ish size (roughly L3 cache-sized)
+        if self._size < self._DOUBLE_ARENA_SIZE_UNTIL:
+            self._size *= 2
+        util.info('allocating a new mmap of length %d', length)
+        arena = Arena(length)
+        self._arenas.append(arena)
+        return (arena, 0, length)
+
+    def _discard_arena(self, arena):
+        # Possibly delete the given (unused) arena
+        length = arena.size
+        # Reusing an existing arena is faster than creating a new one, so
+        # we only reclaim space if it's large enough.
+        if length < self._DISCARD_FREE_SPACE_LARGER_THAN:
+            return
+        blocks = self._allocated_blocks.pop(arena)
+        assert not blocks
+        del self._start_to_block[(arena, 0)]
+        del self._stop_to_block[(arena, length)]
+        self._arenas.remove(arena)
+        seq = self._len_to_seq[length]
+        seq.remove((arena, 0, length))
+        if not seq:
+            del self._len_to_seq[length]
+            self._lengths.remove(length)
+
+    def _malloc(self, size):
+        # returns a large enough block -- it might be much larger
+        i = bisect.bisect_left(self._lengths, size)
+        if i == len(self._lengths):
+            return self._new_arena(size)
+        else:
+            length = self._lengths[i]
+            seq = self._len_to_seq[length]
+            block = seq.pop()
+            if not seq:
+                del self._len_to_seq[length], self._lengths[i]
+
+        (arena, start, stop) = block
+        del self._start_to_block[(arena, start)]
+        del self._stop_to_block[(arena, stop)]
+        return block
+
+    def _add_free_block(self, block):
+        # make block available and try to merge with its neighbours in the arena
+        (arena, start, stop) = block
+
+        try:
+            prev_block = self._stop_to_block[(arena, start)]
+        except KeyError:
+            pass
+        else:
+            start, _ = self._absorb(prev_block)
+
+        try:
+            next_block = self._start_to_block[(arena, stop)]
+        except KeyError:
+            pass
+        else:
+            _, stop = self._absorb(next_block)
+
+        block = (arena, start, stop)
+        length = stop - start
+
+        try:
+            self._len_to_seq[length].append(block)
+        except KeyError:
+            self._len_to_seq[length] = [block]
+            bisect.insort(self._lengths, length)
+
+        self._start_to_block[(arena, start)] = block
+        self._stop_to_block[(arena, stop)] = block
+
+    def _absorb(self, block):
+        # deregister this block so it can be merged with a neighbour
+        (arena, start, stop) = block
+        del self._start_to_block[(arena, start)]
+        del self._stop_to_block[(arena, stop)]
+
+        length = stop - start
+        seq = self._len_to_seq[length]
+        seq.remove(block)
+        if not seq:
+            del self._len_to_seq[length]
+            self._lengths.remove(length)
+
+        return start, stop
+
+    def _remove_allocated_block(self, block):
+        arena, start, stop = block
+        blocks = self._allocated_blocks[arena]
+        blocks.remove((start, stop))
+        if not blocks:
+            # Arena is entirely free, discard it from this process
+            self._discard_arena(arena)
+
+    def _free_pending_blocks(self):
+        # Free all the blocks in the pending list - called with the lock held.
+        while True:
+            try:
+                block = self._pending_free_blocks.pop()
+            except IndexError:
+                break
+            self._add_free_block(block)
+            self._remove_allocated_block(block)
+
+    def free(self, block):
+        # free a block returned by malloc()
+        # Since free() can be called asynchronously by the GC, it could happen
+        # that it's called while self._lock is held: in that case,
+        # self._lock.acquire() would deadlock (issue #12352). To avoid that, a
+        # trylock is used instead, and if the lock can't be acquired
+        # immediately, the block is added to a list of blocks to be freed
+        # synchronously sometimes later from malloc() or free(), by calling
+        # _free_pending_blocks() (appending and retrieving from a list is not
+        # strictly thread-safe but under CPython it's atomic thanks to the GIL).
+        if os.getpid() != self._lastpid:
+            raise ValueError(
+                "My pid ({0:n}) is not last pid {1:n}".format(
+                    os.getpid(),self._lastpid))
+        if not self._lock.acquire(False):
+            # can't acquire the lock right now, add the block to the list of
+            # pending blocks to free
+            self._pending_free_blocks.append(block)
+        else:
+            # we hold the lock
+            try:
+                self._n_frees += 1
+                self._free_pending_blocks()
+                self._add_free_block(block)
+                self._remove_allocated_block(block)
+            finally:
+                self._lock.release()
+
+    def malloc(self, size):
+        # return a block of right size (possibly rounded up)
+        if size < 0:
+            raise ValueError("Size {0:n} out of range".format(size))
+        if sys.maxsize <= size:
+            raise OverflowError("Size {0:n} too large".format(size))
+        if os.getpid() != self._lastpid:
+            self.__init__()                     # reinitialize after fork
+        with self._lock:
+            self._n_mallocs += 1
+            # allow pending blocks to be marked available
+            self._free_pending_blocks()
+            size = self._roundup(max(size, 1), self._alignment)
+            (arena, start, stop) = self._malloc(size)
+            real_stop = start + size
+            if real_stop < stop:
+                # if the returned block is larger than necessary, mark
+                # the remainder available
+                self._add_free_block((arena, real_stop, stop))
+            self._allocated_blocks[arena].add((start, real_stop))
+            return (arena, start, real_stop)
+
+#
+# Class wrapping a block allocated out of a Heap -- can be inherited by child process
+#
+
+class BufferWrapper(object):
+
+    _heap = Heap()
+
+    def __init__(self, size):
+        if size < 0:
+            raise ValueError("Size {0:n} out of range".format(size))
+        if sys.maxsize <= size:
+            raise OverflowError("Size {0:n} too large".format(size))
+        block = BufferWrapper._heap.malloc(size)
+        self._state = (block, size)
+        util.Finalize(self, BufferWrapper._heap.free, args=(block,))
+
+    def create_memoryview(self):
+        (arena, start, stop), size = self._state
+        return memoryview(arena.buffer)[start:start+size]
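Heap._roundup above is the standard power-of-two alignment trick: add `alignment - 1`, then clear the low bits with a mask instead of dividing. A standalone copy showing the arithmetic:

```python
# Round n up to a multiple of a power-of-two alignment with a bitmask.
def roundup(n, alignment):
    # alignment must be a power of 2
    mask = alignment - 1
    return (n + mask) & ~mask

print(roundup(1, 8), roundup(8, 8), roundup(13, 8))  # 8 8 16
print(roundup(4097, 4096))                           # 8192
```

malloc() applies this twice: once with `_alignment` (8) for the requested block, and once with `mmap.PAGESIZE` when sizing a new arena.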
env-llmeval/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py ADDED
@@ -0,0 +1,72 @@
+import io
+import os
+
+from .context import reduction, set_spawning_popen
+from . import popen_fork
+from . import spawn
+from . import util
+
+__all__ = ['Popen']
+
+
+#
+# Wrapper for an fd used while launching a process
+#
+
+class _DupFd(object):
+    def __init__(self, fd):
+        self.fd = fd
+    def detach(self):
+        return self.fd
+
+#
+# Start child process using a fresh interpreter
+#
+
+class Popen(popen_fork.Popen):
+    method = 'spawn'
+    DupFd = _DupFd
+
+    def __init__(self, process_obj):
+        self._fds = []
+        super().__init__(process_obj)
+
+    def duplicate_for_child(self, fd):
+        self._fds.append(fd)
+        return fd
+
+    def _launch(self, process_obj):
+        from . import resource_tracker
+        tracker_fd = resource_tracker.getfd()
+        self._fds.append(tracker_fd)
+        prep_data = spawn.get_preparation_data(process_obj._name)
+        fp = io.BytesIO()
+        set_spawning_popen(self)
+        try:
+            reduction.dump(prep_data, fp)
+            reduction.dump(process_obj, fp)
+        finally:
+            set_spawning_popen(None)
+
+        parent_r = child_w = child_r = parent_w = None
+        try:
+            parent_r, child_w = os.pipe()
+            child_r, parent_w = os.pipe()
+            cmd = spawn.get_command_line(tracker_fd=tracker_fd,
+                                         pipe_handle=child_r)
+            self._fds.extend([child_r, child_w])
+            self.pid = util.spawnv_passfds(spawn.get_executable(),
+                                           cmd, self._fds)
+            self.sentinel = parent_r
+            with open(parent_w, 'wb', closefd=False) as f:
+                f.write(fp.getbuffer())
+        finally:
+            fds_to_close = []
+            for fd in (parent_r, parent_w):
+                if fd is not None:
+                    fds_to_close.append(fd)
+            self.finalizer = util.Finalize(self, util.close_fds, fds_to_close)
+
+            for fd in (child_r, child_w):
+                if fd is not None:
+                    os.close(fd)
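The heart of _launch() above is a pipe handoff: the parent pickles its payload into a BytesIO, then streams the buffer through an os.pipe() whose read end the freshly spawned interpreter inherits. A single-process sketch of the same steps, with plain pickle standing in for reduction.dump:

```python
# Parent writes two pickled objects into a pipe; the "child" end reads them back.
import io
import os
import pickle

fp = io.BytesIO()
pickle.dump({'name': 'Process-1'}, fp)   # stands in for the preparation data
pickle.dump(('target', (1, 2)), fp)      # stands in for the process object

child_r, parent_w = os.pipe()
with open(parent_w, 'wb') as f:          # parent side writes the whole buffer
    f.write(fp.getbuffer())

with open(child_r, 'rb') as f:           # the child's view of the pipe
    prep = pickle.load(f)                # pickle.load reads one object at a time
    obj = pickle.load(f)

print(prep, obj)   # {'name': 'Process-1'} ('target', (1, 2))
```

The real code opens the write end with closefd=False because the fd's lifetime is owned by the Finalize registered in the finally block; here the file objects may simply close their fds.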
env-llmeval/lib/python3.10/site-packages/multiprocess/process.py ADDED
@@ -0,0 +1,438 @@
+#
+# Module providing the `Process` class which emulates `threading.Thread`
+#
+# multiprocessing/process.py
+#
+# Copyright (c) 2006-2008, R Oudkerk
+# Licensed to PSF under a Contributor Agreement.
+#
+
+__all__ = ['BaseProcess', 'current_process', 'active_children',
+           'parent_process']
+
+#
+# Imports
+#
+
+import os
+import sys
+import signal
+import itertools
+import threading
+from _weakrefset import WeakSet
+
+#
+#
+#
+
+try:
+    ORIGINAL_DIR = os.path.abspath(os.getcwd())
+except OSError:
+    ORIGINAL_DIR = None
+
+#
+# Public functions
+#
+
+def current_process():
+    '''
+    Return process object representing the current process
+    '''
+    return _current_process
+
+def active_children():
+    '''
+    Return list of process objects corresponding to live child processes
+    '''
+    _cleanup()
+    return list(_children)
+
+
+def parent_process():
+    '''
+    Return process object representing the parent process
+    '''
+    return _parent_process
+
+#
+#
+#
+
+def _cleanup():
+    # check for processes which have finished
+    for p in list(_children):
+        if p._popen.poll() is not None:
+            _children.discard(p)
+
+#
+# The `Process` class
+#
+
+class BaseProcess(object):
+    '''
+    Process objects represent activity that is run in a separate process
+
+    The class is analogous to `threading.Thread`
+    '''
+    def _Popen(self):
+        raise NotImplementedError
+
+    def __init__(self, group=None, target=None, name=None, args=(), kwargs={},
+                 *, daemon=None):
+        assert group is None, 'group argument must be None for now'
+        count = next(_process_counter)
+        self._identity = _current_process._identity + (count,)
+        self._config = _current_process._config.copy()
+        self._parent_pid = os.getpid()
+        self._parent_name = _current_process.name
+        self._popen = None
+        self._closed = False
+        self._target = target
+        self._args = tuple(args)
+        self._kwargs = dict(kwargs)
+        self._name = name or type(self).__name__ + '-' + \
+                     ':'.join(str(i) for i in self._identity)
+        if daemon is not None:
+            self.daemon = daemon
+        _dangling.add(self)
+
+    def _check_closed(self):
+        if self._closed:
+            raise ValueError("process object is closed")
+
+    def run(self):
+        '''
+        Method to be run in sub-process; can be overridden in sub-class
+        '''
+        if self._target:
+            self._target(*self._args, **self._kwargs)
+
+    def start(self):
+        '''
+        Start child process
+        '''
+        self._check_closed()
+        assert self._popen is None, 'cannot start a process twice'
+        assert self._parent_pid == os.getpid(), \
+               'can only start a process object created by current process'
+        assert not _current_process._config.get('daemon'), \
+               'daemonic processes are not allowed to have children'
+        _cleanup()
+        self._popen = self._Popen(self)
+        self._sentinel = self._popen.sentinel
+        # Avoid a refcycle if the target function holds an indirect
+        # reference to the process object (see bpo-30775)
+        del self._target, self._args, self._kwargs
+        _children.add(self)
+
+    def terminate(self):
+        '''
+        Terminate process; sends SIGTERM signal or uses TerminateProcess()
+        '''
+        self._check_closed()
+        self._popen.terminate()
+
+    def kill(self):
+        '''
+        Terminate process; sends SIGKILL signal or uses TerminateProcess()
+        '''
+        self._check_closed()
+        self._popen.kill()
+
+    def join(self, timeout=None):
+        '''
+        Wait until child process terminates
+        '''
+        self._check_closed()
+        assert self._parent_pid == os.getpid(), 'can only join a child process'
+        assert self._popen is not None, 'can only join a started process'
+        res = self._popen.wait(timeout)
+        if res is not None:
+            _children.discard(self)
+
+    def is_alive(self):
+        '''
+        Return whether process is alive
+        '''
+        self._check_closed()
+        if self is _current_process:
+            return True
+        assert self._parent_pid == os.getpid(), 'can only test a child process'
+
+        if self._popen is None:
+            return False
+
+        returncode = self._popen.poll()
+        if returncode is None:
+            return True
+        else:
+            _children.discard(self)
+            return False
+
+    def close(self):
+        '''
+        Close the Process object.
+
+        This method releases resources held by the Process object.  It is
+        an error to call this method if the child process is still running.
+        '''
+        if self._popen is not None:
+            if self._popen.poll() is None:
+                raise ValueError("Cannot close a process while it is still running. "
+                                 "You should first call join() or terminate().")
+            self._popen.close()
+            self._popen = None
+            del self._sentinel
+            _children.discard(self)
+        self._closed = True
+
+    @property
+    def name(self):
+        return self._name
+
+    @name.setter
+    def name(self, name):
+        assert isinstance(name, str), 'name must be a string'
+        self._name = name
+
+    @property
+    def daemon(self):
+        '''
+        Return whether process is a daemon
+        '''
+        return self._config.get('daemon', False)
+
+    @daemon.setter
+    def daemon(self, daemonic):
+        '''
+        Set whether process is a daemon
+        '''
+        assert self._popen is None, 'process has already started'
+        self._config['daemon'] = daemonic
+
+    @property
+    def authkey(self):
+        return self._config['authkey']
+
+    @authkey.setter
+    def authkey(self, authkey):
+        '''
+        Set authorization key of process
+        '''
+        self._config['authkey'] = AuthenticationString(authkey)
+
+    @property
+    def exitcode(self):
+        '''
+        Return exit code of process or `None` if it has yet to stop
+        '''
+        self._check_closed()
+        if self._popen is None:
+            return self._popen
+        return self._popen.poll()
+
+    @property
+    def ident(self):
+        '''
+        Return identifier (PID) of process or `None` if it has yet to start
+        '''
+        self._check_closed()
+        if self is _current_process:
+            return os.getpid()
+        else:
+            return self._popen and self._popen.pid
+
+    pid = ident
+
+    @property
+    def sentinel(self):
+        '''
+        Return a file descriptor (Unix) or handle (Windows) suitable for
+        waiting for process termination.
+        '''
+        self._check_closed()
+        try:
+            return self._sentinel
+        except AttributeError:
+            raise ValueError("process not started") from None
+
+    def __repr__(self):
+        exitcode = None
+        if self is _current_process:
+            status = 'started'
+        elif self._closed:
+            status = 'closed'
+        elif self._parent_pid != os.getpid():
+            status = 'unknown'
+        elif self._popen is None:
+            status = 'initial'
+        else:
+            exitcode = self._popen.poll()
+            if exitcode is not None:
+                status = 'stopped'
+            else:
+                status = 'started'
+
+        info = [type(self).__name__, 'name=%r' % self._name]
+        if self._popen is not None:
+            info.append('pid=%s' % self._popen.pid)
+        info.append('parent=%s' % self._parent_pid)
+        info.append(status)
+ if exitcode is not None:
282
+ exitcode = _exitcode_to_name.get(exitcode, exitcode)
283
+ info.append('exitcode=%s' % exitcode)
284
+ if self.daemon:
285
+ info.append('daemon')
286
+ return '<%s>' % ' '.join(info)
287
+
288
+ ##
289
+
290
+ def _bootstrap(self, parent_sentinel=None):
291
+ from . import util, context
292
+ global _current_process, _parent_process, _process_counter, _children
293
+
294
+ try:
295
+ if self._start_method is not None:
296
+ context._force_start_method(self._start_method)
297
+ _process_counter = itertools.count(1)
298
+ _children = set()
299
+ util._close_stdin()
300
+ old_process = _current_process
301
+ _current_process = self
302
+ _parent_process = _ParentProcess(
303
+ self._parent_name, self._parent_pid, parent_sentinel)
304
+ if threading._HAVE_THREAD_NATIVE_ID:
305
+ threading.main_thread()._set_native_id()
306
+ try:
307
+ self._after_fork()
308
+ finally:
309
+ # delay finalization of the old process object until after
310
+ # _run_after_forkers() is executed
311
+ del old_process
312
+ util.info('child process calling self.run()')
313
+ try:
314
+ self.run()
315
+ exitcode = 0
316
+ finally:
317
+ util._exit_function()
318
+ except SystemExit as e:
319
+ if e.code is None:
320
+ exitcode = 0
321
+ elif isinstance(e.code, int):
322
+ exitcode = e.code
323
+ else:
324
+ sys.stderr.write(str(e.code) + '\n')
325
+ exitcode = 1
326
+ except:
327
+ exitcode = 1
328
+ import traceback
329
+ sys.stderr.write('Process %s:\n' % self.name)
330
+ traceback.print_exc()
331
+ finally:
332
+ threading._shutdown()
333
+ util.info('process exiting with exitcode %d' % exitcode)
334
+ util._flush_std_streams()
335
+
336
+ return exitcode
337
+
338
+ @staticmethod
339
+ def _after_fork():
340
+ from . import util
341
+ util._finalizer_registry.clear()
342
+ util._run_after_forkers()
343
+
344
+
345
+ #
346
+ # We subclass bytes to avoid accidental transmission of auth keys over network
347
+ #
348
+
349
+ class AuthenticationString(bytes):
350
+ def __reduce__(self):
351
+ from .context import get_spawning_popen
352
+ if get_spawning_popen() is None:
353
+ raise TypeError(
354
+ 'Pickling an AuthenticationString object is '
355
+ 'disallowed for security reasons'
356
+ )
357
+ return AuthenticationString, (bytes(self),)
358
+
359
+
360
+ #
361
+ # Create object representing the parent process
362
+ #
363
+
364
+ class _ParentProcess(BaseProcess):
365
+
366
+ def __init__(self, name, pid, sentinel):
367
+ self._identity = ()
368
+ self._name = name
369
+ self._pid = pid
370
+ self._parent_pid = None
371
+ self._popen = None
372
+ self._closed = False
373
+ self._sentinel = sentinel
374
+ self._config = {}
375
+
376
+ def is_alive(self):
377
+ from multiprocess.connection import wait
378
+ return not wait([self._sentinel], timeout=0)
379
+
380
+ @property
381
+ def ident(self):
382
+ return self._pid
383
+
384
+ def join(self, timeout=None):
385
+ '''
386
+ Wait until parent process terminates
387
+ '''
388
+ from multiprocess.connection import wait
389
+ wait([self._sentinel], timeout=timeout)
390
+
391
+ pid = ident
392
+
393
+ #
394
+ # Create object representing the main process
395
+ #
396
+
397
+ class _MainProcess(BaseProcess):
398
+
399
+ def __init__(self):
400
+ self._identity = ()
401
+ self._name = 'MainProcess'
402
+ self._parent_pid = None
403
+ self._popen = None
404
+ self._closed = False
405
+ self._config = {'authkey': AuthenticationString(os.urandom(32)),
406
+ 'semprefix': '/mp'}
407
+ # Note that some versions of FreeBSD only allow named
408
+ # semaphores to have names of up to 14 characters. Therefore
409
+ # we choose a short prefix.
410
+ #
411
+ # On MacOSX in a sandbox it may be necessary to use a
412
+ # different prefix -- see #19478.
413
+ #
414
+ # Everything in self._config will be inherited by descendant
415
+ # processes.
416
+
417
+ def close(self):
418
+ pass
419
+
420
+
421
+ _parent_process = None
422
+ _current_process = _MainProcess()
423
+ _process_counter = itertools.count(1)
424
+ _children = set()
425
+ del _MainProcess
426
+
427
+ #
428
+ # Give names to some return codes
429
+ #
430
+
431
+ _exitcode_to_name = {}
432
+
433
+ for name, signum in list(signal.__dict__.items()):
434
+ if name[:3]=='SIG' and '_' not in name:
435
+ _exitcode_to_name[-signum] = f'-{name}'
436
+
437
+ # For debug and leak testing
438
+ _dangling = WeakSet()
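
The `_exitcode_to_name` table built at the end of `process.py` maps negative exit codes back to signal names, which is what lets `__repr__` show `exitcode=-SIGTERM` for a signal-killed child instead of a raw `-15`. The same lookup can be reproduced standalone with nothing but the stdlib `signal` module:

```python
import signal

# Rebuild the mapping process.py constructs at import time: a child
# killed by signal N exits with code -N, and this table turns that
# raw number back into a readable name.
exitcode_to_name = {}
for name, signum in list(signal.__dict__.items()):
    if name[:3] == 'SIG' and '_' not in name:
        exitcode_to_name[-signum] = f'-{name}'

print(exitcode_to_name[-signal.SIGTERM])  # '-SIGTERM'
```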
env-llmeval/lib/python3.10/site-packages/multiprocess/queues.py ADDED
@@ -0,0 +1,382 @@
+#
+# Module implementing queues
+#
+# multiprocessing/queues.py
+#
+# Copyright (c) 2006-2008, R Oudkerk
+# Licensed to PSF under a Contributor Agreement.
+#
+
+__all__ = ['Queue', 'SimpleQueue', 'JoinableQueue']
+
+import sys
+import os
+import threading
+import collections
+import time
+import types
+import weakref
+import errno
+
+from queue import Empty, Full
+
+try:
+    import _multiprocess as _multiprocessing
+except ImportError:
+    import _multiprocessing
+
+from . import connection
+from . import context
+_ForkingPickler = context.reduction.ForkingPickler
+
+from .util import debug, info, Finalize, register_after_fork, is_exiting
+
+#
+# Queue type using a pipe, buffer and thread
+#
+
+class Queue(object):
+
+    def __init__(self, maxsize=0, *, ctx):
+        if maxsize <= 0:
+            # Can raise ImportError (see issues #3770 and #23400)
+            from .synchronize import SEM_VALUE_MAX as maxsize
+        self._maxsize = maxsize
+        self._reader, self._writer = connection.Pipe(duplex=False)
+        self._rlock = ctx.Lock()
+        self._opid = os.getpid()
+        if sys.platform == 'win32':
+            self._wlock = None
+        else:
+            self._wlock = ctx.Lock()
+        self._sem = ctx.BoundedSemaphore(maxsize)
+        # For use by concurrent.futures
+        self._ignore_epipe = False
+        self._reset()
+
+        if sys.platform != 'win32':
+            register_after_fork(self, Queue._after_fork)
+
+    def __getstate__(self):
+        context.assert_spawning(self)
+        return (self._ignore_epipe, self._maxsize, self._reader, self._writer,
+                self._rlock, self._wlock, self._sem, self._opid)
+
+    def __setstate__(self, state):
+        (self._ignore_epipe, self._maxsize, self._reader, self._writer,
+         self._rlock, self._wlock, self._sem, self._opid) = state
+        self._reset()
+
+    def _after_fork(self):
+        debug('Queue._after_fork()')
+        self._reset(after_fork=True)
+
+    def _reset(self, after_fork=False):
+        if after_fork:
+            self._notempty._at_fork_reinit()
+        else:
+            self._notempty = threading.Condition(threading.Lock())
+        self._buffer = collections.deque()
+        self._thread = None
+        self._jointhread = None
+        self._joincancelled = False
+        self._closed = False
+        self._close = None
+        self._send_bytes = self._writer.send_bytes
+        self._recv_bytes = self._reader.recv_bytes
+        self._poll = self._reader.poll
+
+    def put(self, obj, block=True, timeout=None):
+        if self._closed:
+            raise ValueError(f"Queue {self!r} is closed")
+        if not self._sem.acquire(block, timeout):
+            raise Full
+
+        with self._notempty:
+            if self._thread is None:
+                self._start_thread()
+            self._buffer.append(obj)
+            self._notempty.notify()
+
+    def get(self, block=True, timeout=None):
+        if self._closed:
+            raise ValueError(f"Queue {self!r} is closed")
+        if block and timeout is None:
+            with self._rlock:
+                res = self._recv_bytes()
+            self._sem.release()
+        else:
+            if block:
+                deadline = getattr(time,'monotonic',time.time)() + timeout
+            if not self._rlock.acquire(block, timeout):
+                raise Empty
+            try:
+                if block:
+                    timeout = deadline - getattr(time,'monotonic',time.time)()
+                    if not self._poll(timeout):
+                        raise Empty
+                elif not self._poll():
+                    raise Empty
+                res = self._recv_bytes()
+                self._sem.release()
+            finally:
+                self._rlock.release()
+        # unserialize the data after having released the lock
+        return _ForkingPickler.loads(res)
+
+    def qsize(self):
+        # Raises NotImplementedError on Mac OSX because of broken sem_getvalue()
+        return self._maxsize - self._sem._semlock._get_value()
+
+    def empty(self):
+        return not self._poll()
+
+    def full(self):
+        return self._sem._semlock._is_zero()
+
+    def get_nowait(self):
+        return self.get(False)
+
+    def put_nowait(self, obj):
+        return self.put(obj, False)
+
+    def close(self):
+        self._closed = True
+        close = self._close
+        if close:
+            self._close = None
+            close()
+
+    def join_thread(self):
+        debug('Queue.join_thread()')
+        assert self._closed, "Queue {0!r} not closed".format(self)
+        if self._jointhread:
+            self._jointhread()
+
+    def cancel_join_thread(self):
+        debug('Queue.cancel_join_thread()')
+        self._joincancelled = True
+        try:
+            self._jointhread.cancel()
+        except AttributeError:
+            pass
+
+    def _start_thread(self):
+        debug('Queue._start_thread()')
+
+        # Start thread which transfers data from buffer to pipe
+        self._buffer.clear()
+        self._thread = threading.Thread(
+            target=Queue._feed,
+            args=(self._buffer, self._notempty, self._send_bytes,
+                  self._wlock, self._reader.close, self._writer.close,
+                  self._ignore_epipe, self._on_queue_feeder_error,
+                  self._sem),
+            name='QueueFeederThread'
+        )
+        self._thread.daemon = True
+
+        debug('doing self._thread.start()')
+        self._thread.start()
+        debug('... done self._thread.start()')
+
+        if not self._joincancelled:
+            self._jointhread = Finalize(
+                self._thread, Queue._finalize_join,
+                [weakref.ref(self._thread)],
+                exitpriority=-5
+                )
+
+        # Send sentinel to the thread queue object when garbage collected
+        self._close = Finalize(
+            self, Queue._finalize_close,
+            [self._buffer, self._notempty],
+            exitpriority=10
+            )
+
+    @staticmethod
+    def _finalize_join(twr):
+        debug('joining queue thread')
+        thread = twr()
+        if thread is not None:
+            thread.join()
+            debug('... queue thread joined')
+        else:
+            debug('... queue thread already dead')
+
+    @staticmethod
+    def _finalize_close(buffer, notempty):
+        debug('telling queue thread to quit')
+        with notempty:
+            buffer.append(_sentinel)
+            notempty.notify()
+
+    @staticmethod
+    def _feed(buffer, notempty, send_bytes, writelock, reader_close,
+              writer_close, ignore_epipe, onerror, queue_sem):
+        debug('starting thread to feed data to pipe')
+        nacquire = notempty.acquire
+        nrelease = notempty.release
+        nwait = notempty.wait
+        bpopleft = buffer.popleft
+        sentinel = _sentinel
+        if sys.platform != 'win32':
+            wacquire = writelock.acquire
+            wrelease = writelock.release
+        else:
+            wacquire = None
+
+        while 1:
+            try:
+                nacquire()
+                try:
+                    if not buffer:
+                        nwait()
+                finally:
+                    nrelease()
+                try:
+                    while 1:
+                        obj = bpopleft()
+                        if obj is sentinel:
+                            debug('feeder thread got sentinel -- exiting')
+                            reader_close()
+                            writer_close()
+                            return
+
+                        # serialize the data before acquiring the lock
+                        obj = _ForkingPickler.dumps(obj)
+                        if wacquire is None:
+                            send_bytes(obj)
+                        else:
+                            wacquire()
+                            try:
+                                send_bytes(obj)
+                            finally:
+                                wrelease()
+                except IndexError:
+                    pass
+            except Exception as e:
+                if ignore_epipe and getattr(e, 'errno', 0) == errno.EPIPE:
+                    return
+                # Since this runs in a daemon thread the resources it uses
+                # may be become unusable while the process is cleaning up.
+                # We ignore errors which happen after the process has
+                # started to cleanup.
+                if is_exiting():
+                    info('error in queue thread: %s', e)
+                    return
+                else:
+                    # Since the object has not been sent in the queue, we need
+                    # to decrease the size of the queue. The error acts as
+                    # if the object had been silently removed from the queue
+                    # and this step is necessary to have a properly working
+                    # queue.
+                    queue_sem.release()
+                    onerror(e, obj)
+
+    @staticmethod
+    def _on_queue_feeder_error(e, obj):
+        """
+        Private API hook called when feeding data in the background thread
+        raises an exception.  For overriding by concurrent.futures.
+        """
+        import traceback
+        traceback.print_exc()
+
+
+_sentinel = object()
+
+#
+# A queue type which also supports join() and task_done() methods
+#
+# Note that if you do not call task_done() for each finished task then
+# eventually the counter's semaphore may overflow causing Bad Things
+# to happen.
+#
+
+class JoinableQueue(Queue):
+
+    def __init__(self, maxsize=0, *, ctx):
+        Queue.__init__(self, maxsize, ctx=ctx)
+        self._unfinished_tasks = ctx.Semaphore(0)
+        self._cond = ctx.Condition()
+
+    def __getstate__(self):
+        return Queue.__getstate__(self) + (self._cond, self._unfinished_tasks)
+
+    def __setstate__(self, state):
+        Queue.__setstate__(self, state[:-2])
+        self._cond, self._unfinished_tasks = state[-2:]
+
+    def put(self, obj, block=True, timeout=None):
+        if self._closed:
+            raise ValueError(f"Queue {self!r} is closed")
+        if not self._sem.acquire(block, timeout):
+            raise Full
+
+        with self._notempty, self._cond:
+            if self._thread is None:
+                self._start_thread()
+            self._buffer.append(obj)
+            self._unfinished_tasks.release()
+            self._notempty.notify()
+
+    def task_done(self):
+        with self._cond:
+            if not self._unfinished_tasks.acquire(False):
+                raise ValueError('task_done() called too many times')
+            if self._unfinished_tasks._semlock._is_zero():
+                self._cond.notify_all()
+
+    def join(self):
+        with self._cond:
+            if not self._unfinished_tasks._semlock._is_zero():
+                self._cond.wait()
+
+#
+# Simplified Queue type -- really just a locked pipe
+#
+
+class SimpleQueue(object):
+
+    def __init__(self, *, ctx):
+        self._reader, self._writer = connection.Pipe(duplex=False)
+        self._rlock = ctx.Lock()
+        self._poll = self._reader.poll
+        if sys.platform == 'win32':
+            self._wlock = None
+        else:
+            self._wlock = ctx.Lock()
+
+    def close(self):
+        self._reader.close()
+        self._writer.close()
+
+    def empty(self):
+        return not self._poll()
+
+    def __getstate__(self):
+        context.assert_spawning(self)
+        return (self._reader, self._writer, self._rlock, self._wlock)
+
+    def __setstate__(self, state):
+        (self._reader, self._writer, self._rlock, self._wlock) = state
+        self._poll = self._reader.poll
+
+    def get(self):
+        with self._rlock:
+            res = self._reader.recv_bytes()
+        # unserialize the data after having released the lock
+        return _ForkingPickler.loads(res)
+
+    def put(self, obj):
+        # serialize the data before acquiring the lock
+        obj = _ForkingPickler.dumps(obj)
+        if self._wlock is None:
+            # writes to a message oriented win32 pipe are atomic
+            self._writer.send_bytes(obj)
+        else:
+            with self._wlock:
+                self._writer.send_bytes(obj)
+
+    __class_getitem__ = classmethod(types.GenericAlias)
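
In `queues.py`, `put()` only appends to an in-process buffer and wakes a feeder thread, which pickles the item and writes it to the pipe; `get()` unpickles only after releasing `_rlock`. The observable round-trip behaviour can be sketched in a single process with the stdlib `multiprocessing` package (of which `multiprocess` is a drop-in fork with the same `Queue` API):

```python
import multiprocessing as mp
from queue import Empty

q = mp.Queue(maxsize=2)   # maxsize is enforced by the bounded semaphore
q.put('a')                # buffered, then written to the pipe by the feeder thread
q.put('b')

first = q.get()           # blocks until the feeder has flushed the item
second = q.get()          # FIFO order is preserved
try:                      # a timed get exercises the deadline/poll branch
    q.get(timeout=0.1)
    drained = False
except Empty:
    drained = True

q.close()
q.join_thread()           # wait for the feeder thread to finish
print(first, second, drained)  # a b True
```

Note that `qsize()`, `empty()` and `full()` are only snapshots: the feeder thread may not have flushed a just-`put` item yet, which is why the sketch relies on blocking `get()` rather than `empty()`.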
env-llmeval/lib/python3.10/site-packages/multiprocess/reduction.py ADDED
@@ -0,0 +1,284 @@
+#
+# Module which deals with pickling of objects.
+#
+# multiprocessing/reduction.py
+#
+# Copyright (c) 2006-2008, R Oudkerk
+# Licensed to PSF under a Contributor Agreement.
+#
+
+from abc import ABCMeta
+import copyreg
+import functools
+import io
+import os
+try:
+    import dill as pickle
+except ImportError:
+    import pickle
+import socket
+import sys
+
+from . import context
+
+__all__ = ['send_handle', 'recv_handle', 'ForkingPickler', 'register', 'dump']
+
+
+HAVE_SEND_HANDLE = (sys.platform == 'win32' or
+                    (hasattr(socket, 'CMSG_LEN') and
+                     hasattr(socket, 'SCM_RIGHTS') and
+                     hasattr(socket.socket, 'sendmsg')))
+
+#
+# Pickler subclass
+#
+
+class ForkingPickler(pickle.Pickler):
+    '''Pickler subclass used by multiprocess.'''
+    _extra_reducers = {}
+    _copyreg_dispatch_table = copyreg.dispatch_table
+
+    def __init__(self, *args, **kwds):
+        super().__init__(*args, **kwds)
+        self.dispatch_table = self._copyreg_dispatch_table.copy()
+        self.dispatch_table.update(self._extra_reducers)
+
+    @classmethod
+    def register(cls, type, reduce):
+        '''Register a reduce function for a type.'''
+        cls._extra_reducers[type] = reduce
+
+    @classmethod
+    def dumps(cls, obj, protocol=None, *args, **kwds):
+        buf = io.BytesIO()
+        cls(buf, protocol, *args, **kwds).dump(obj)
+        return buf.getbuffer()
+
+    loads = pickle.loads
+
+register = ForkingPickler.register
+
+def dump(obj, file, protocol=None, *args, **kwds):
+    '''Replacement for pickle.dump() using ForkingPickler.'''
+    ForkingPickler(file, protocol, *args, **kwds).dump(obj)
+
+#
+# Platform specific definitions
+#
+
+if sys.platform == 'win32':
+    # Windows
+    __all__ += ['DupHandle', 'duplicate', 'steal_handle']
+    import _winapi
+
+    def duplicate(handle, target_process=None, inheritable=False,
+                  *, source_process=None):
+        '''Duplicate a handle.  (target_process is a handle not a pid!)'''
+        current_process = _winapi.GetCurrentProcess()
+        if source_process is None:
+            source_process = current_process
+        if target_process is None:
+            target_process = current_process
+        return _winapi.DuplicateHandle(
+            source_process, handle, target_process,
+            0, inheritable, _winapi.DUPLICATE_SAME_ACCESS)
+
+    def steal_handle(source_pid, handle):
+        '''Steal a handle from process identified by source_pid.'''
+        source_process_handle = _winapi.OpenProcess(
+            _winapi.PROCESS_DUP_HANDLE, False, source_pid)
+        try:
+            return _winapi.DuplicateHandle(
+                source_process_handle, handle,
+                _winapi.GetCurrentProcess(), 0, False,
+                _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
+        finally:
+            _winapi.CloseHandle(source_process_handle)
+
+    def send_handle(conn, handle, destination_pid):
+        '''Send a handle over a local connection.'''
+        dh = DupHandle(handle, _winapi.DUPLICATE_SAME_ACCESS, destination_pid)
+        conn.send(dh)
+
+    def recv_handle(conn):
+        '''Receive a handle over a local connection.'''
+        return conn.recv().detach()
+
+    class DupHandle(object):
+        '''Picklable wrapper for a handle.'''
+        def __init__(self, handle, access, pid=None):
+            if pid is None:
+                # We just duplicate the handle in the current process and
+                # let the receiving process steal the handle.
+                pid = os.getpid()
+            proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, pid)
+            try:
+                self._handle = _winapi.DuplicateHandle(
+                    _winapi.GetCurrentProcess(),
+                    handle, proc, access, False, 0)
+            finally:
+                _winapi.CloseHandle(proc)
+            self._access = access
+            self._pid = pid
+
+        def detach(self):
+            '''Get the handle.  This should only be called once.'''
+            # retrieve handle from process which currently owns it
+            if self._pid == os.getpid():
+                # The handle has already been duplicated for this process.
+                return self._handle
+            # We must steal the handle from the process whose pid is self._pid.
+            proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False,
+                                       self._pid)
+            try:
+                return _winapi.DuplicateHandle(
+                    proc, self._handle, _winapi.GetCurrentProcess(),
+                    self._access, False, _winapi.DUPLICATE_CLOSE_SOURCE)
+            finally:
+                _winapi.CloseHandle(proc)
+
+else:
+    # Unix
+    __all__ += ['DupFd', 'sendfds', 'recvfds']
+    import array
+
+    # On MacOSX we should acknowledge receipt of fds -- see Issue14669
+    ACKNOWLEDGE = sys.platform == 'darwin'
+
+    def sendfds(sock, fds):
+        '''Send an array of fds over an AF_UNIX socket.'''
+        fds = array.array('i', fds)
+        msg = bytes([len(fds) % 256])
+        sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
+        if ACKNOWLEDGE and sock.recv(1) != b'A':
+            raise RuntimeError('did not receive acknowledgement of fd')
+
+    def recvfds(sock, size):
+        '''Receive an array of fds over an AF_UNIX socket.'''
+        a = array.array('i')
+        bytes_size = a.itemsize * size
+        msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_SPACE(bytes_size))
+        if not msg and not ancdata:
+            raise EOFError
+        try:
+            if ACKNOWLEDGE:
+                sock.send(b'A')
+            if len(ancdata) != 1:
+                raise RuntimeError('received %d items of ancdata' %
+                                   len(ancdata))
+            cmsg_level, cmsg_type, cmsg_data = ancdata[0]
+            if (cmsg_level == socket.SOL_SOCKET and
+                cmsg_type == socket.SCM_RIGHTS):
+                if len(cmsg_data) % a.itemsize != 0:
+                    raise ValueError
+                a.frombytes(cmsg_data)
+                if len(a) % 256 != msg[0]:
+                    raise AssertionError(
+                        "Len is {0:n} but msg[0] is {1!r}".format(
+                            len(a), msg[0]))
+                return list(a)
+        except (ValueError, IndexError):
+            pass
+        raise RuntimeError('Invalid data received')
+
+    def send_handle(conn, handle, destination_pid):
+        '''Send a handle over a local connection.'''
+        with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
+            sendfds(s, [handle])
+
+    def recv_handle(conn):
+        '''Receive a handle over a local connection.'''
+        with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
+            return recvfds(s, 1)[0]
+
+    def DupFd(fd):
+        '''Return a wrapper for an fd.'''
+        popen_obj = context.get_spawning_popen()
+        if popen_obj is not None:
+            return popen_obj.DupFd(popen_obj.duplicate_for_child(fd))
+        elif HAVE_SEND_HANDLE:
+            from . import resource_sharer
+            return resource_sharer.DupFd(fd)
+        else:
+            raise ValueError('SCM_RIGHTS appears not to be available')
+
+#
+# Try making some callable types picklable
+#
+
+def _reduce_method(m):
+    if m.__self__ is None:
+        return getattr, (m.__class__, m.__func__.__name__)
+    else:
+        return getattr, (m.__self__, m.__func__.__name__)
+class _C:
+    def f(self):
+        pass
+register(type(_C().f), _reduce_method)
+
+
+def _reduce_method_descriptor(m):
+    return getattr, (m.__objclass__, m.__name__)
+register(type(list.append), _reduce_method_descriptor)
+register(type(int.__add__), _reduce_method_descriptor)
+
+
+def _reduce_partial(p):
+    return _rebuild_partial, (p.func, p.args, p.keywords or {})
+def _rebuild_partial(func, args, keywords):
+    return functools.partial(func, *args, **keywords)
+register(functools.partial, _reduce_partial)
+
+#
+# Make sockets picklable
+#
+
+if sys.platform == 'win32':
+    def _reduce_socket(s):
+        from .resource_sharer import DupSocket
+        return _rebuild_socket, (DupSocket(s),)
+    def _rebuild_socket(ds):
+        return ds.detach()
+    register(socket.socket, _reduce_socket)
+
+else:
+    def _reduce_socket(s):
+        df = DupFd(s.fileno())
+        return _rebuild_socket, (df, s.family, s.type, s.proto)
+    def _rebuild_socket(df, family, type, proto):
+        fd = df.detach()
+        return socket.socket(family, type, proto, fileno=fd)
+    register(socket.socket, _reduce_socket)
+
+
+class AbstractReducer(metaclass=ABCMeta):
+    '''Abstract base class for use in implementing a Reduction class
+    suitable for use in replacing the standard reduction mechanism
+    used in multiprocess.'''
+    ForkingPickler = ForkingPickler
+    register = register
+    dump = dump
+    send_handle = send_handle
+    recv_handle = recv_handle
+
+    if sys.platform == 'win32':
+        steal_handle = steal_handle
+        duplicate = duplicate
+        DupHandle = DupHandle
+    else:
+        sendfds = sendfds
+        recvfds = recvfds
+        DupFd = DupFd
+
+    _reduce_method = _reduce_method
+    _reduce_method_descriptor = _reduce_method_descriptor
+    _rebuild_partial = _rebuild_partial
+    _reduce_socket = _reduce_socket
+    _rebuild_socket = _rebuild_socket
+
+    def __init__(self, *args):
+        register(type(_C().f), _reduce_method)
+        register(type(list.append), _reduce_method_descriptor)
+        register(type(int.__add__), _reduce_method_descriptor)
+        register(functools.partial, _reduce_partial)
+        register(socket.socket, _reduce_socket)
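
`ForkingPickler` works by copying `copyreg.dispatch_table` and layering its class-level `_extra_reducers` on top, so reducers added via `register()` affect only this pickler, never plain `pickle`. A minimal self-contained sketch of that mechanism (`MiniForkingPickler` and `reduce_method` are illustrative names, not part of the module):

```python
import copyreg
import io
import pickle

class MiniForkingPickler(pickle.Pickler):
    # Per-class reducers layered over the global copyreg table,
    # mirroring ForkingPickler._extra_reducers in reduction.py.
    _extra_reducers = {}

    def __init__(self, *args, **kwds):
        super().__init__(*args, **kwds)
        self.dispatch_table = copyreg.dispatch_table.copy()
        self.dispatch_table.update(self._extra_reducers)

    @classmethod
    def register(cls, type, reduce):
        cls._extra_reducers[type] = reduce

    @classmethod
    def dumps(cls, obj, protocol=None):
        buf = io.BytesIO()
        cls(buf, protocol).dump(obj)
        return buf.getbuffer()

    loads = staticmethod(pickle.loads)

def reduce_method(m):
    # Same idea as _reduce_method: rebuild a bound method on the
    # receiving side as getattr(instance, method_name).
    return getattr, (m.__self__, m.__func__.__name__)

class C:
    def f(self):
        return 42

MiniForkingPickler.register(type(C().f), reduce_method)
m2 = MiniForkingPickler.loads(MiniForkingPickler.dumps(C().f))
print(m2())  # 42
```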
env-llmeval/lib/python3.10/site-packages/multiprocess/resource_tracker.py ADDED
@@ -0,0 +1,242 @@
+###############################################################################
+# Server process to keep track of unlinked resources (like shared memory
+# segments, semaphores etc.) and clean them.
+#
+# On Unix we run a server process which keeps track of unlinked
+# resources. The server ignores SIGINT and SIGTERM and reads from a
+# pipe.  Every other process of the program has a copy of the writable
+# end of the pipe, so we get EOF when all other processes have exited.
+# Then the server process unlinks any remaining resource names.
+#
+# This is important because there may be system limits for such resources: for
+# instance, the system only supports a limited number of named semaphores, and
+# shared-memory segments live in the RAM. If a python process leaks such a
+# resource, this resource will not be removed till the next reboot.  Without
+# this resource tracker process, "killall python" would probably leave unlinked
+# resources.
+
+import os
+import signal
+import sys
+import threading
+import warnings
+
+from . import spawn
+from . import util
+
+__all__ = ['ensure_running', 'register', 'unregister']
+
+_HAVE_SIGMASK = hasattr(signal, 'pthread_sigmask')
+_IGNORED_SIGNALS = (signal.SIGINT, signal.SIGTERM)
+
+_CLEANUP_FUNCS = {
+    'noop': lambda: None,
+}
+
+if os.name == 'posix':
+    try:
+        import _multiprocess as _multiprocessing
+    except ImportError:
+        import _multiprocessing
+    import _posixshmem
+
+    # Use sem_unlink() to clean up named semaphores.
+    #
+    # sem_unlink() may be missing if the Python build process detected the
+    # absence of POSIX named semaphores. In that case, no named semaphores were
+    # ever opened, so no cleanup would be necessary.
+    if hasattr(_multiprocessing, 'sem_unlink'):
+        _CLEANUP_FUNCS.update({
+            'semaphore': _multiprocessing.sem_unlink,
+        })
+    _CLEANUP_FUNCS.update({
+        'shared_memory': _posixshmem.shm_unlink,
+    })
+
+
+class ResourceTracker(object):
+
+    def __init__(self):
+        self._lock = threading.Lock()
+        self._fd = None
+        self._pid = None
+
+    def _stop(self):
+        with self._lock:
+            if self._fd is None:
+                # not running
+                return
+
+            # closing the "alive" file descriptor stops main()
+            os.close(self._fd)
+            self._fd = None
+
+            os.waitpid(self._pid, 0)
+            self._pid = None
+
+    def getfd(self):
+        self.ensure_running()
+        return self._fd
+
+    def ensure_running(self):
+        '''Make sure that resource tracker process is running.
+
+        This can be run from any process.  Usually a child process will use
+        the resource created by its parent.'''
+        with self._lock:
+            if self._fd is not None:
+                # resource tracker was launched before, is it still running?
+                if self._check_alive():
+                    # => still alive
+                    return
+                # => dead, launch it again
+                os.close(self._fd)
+
+                # Clean-up to avoid dangling processes.
+                try:
+                    # _pid can be None if this process is a child from another
+                    # python process, which has started the resource_tracker.
+                    if self._pid is not None:
+                        os.waitpid(self._pid, 0)
+                except ChildProcessError:
+                    # The resource_tracker has already been terminated.
+                    pass
+                self._fd = None
+                self._pid = None
+
+                warnings.warn('resource_tracker: process died unexpectedly, '
+                              'relaunching.  Some resources might leak.')
+
+            fds_to_pass = []
+            try:
+                fds_to_pass.append(sys.stderr.fileno())
+            except Exception:
+                pass
+            cmd = 'from multiprocess.resource_tracker import main;main(%d)'
+            r, w = os.pipe()
+            try:
+                fds_to_pass.append(r)
+                # process will out live us, so no need to wait on pid
+                exe = spawn.get_executable()
+                args = [exe] + util._args_from_interpreter_flags()
+                args += ['-c', cmd % r]
+                # bpo-33613: Register a signal mask that will block the signals.
+                # This signal mask will be inherited by the child that is going
+                # to be spawned and will protect the child from a race condition
+                # that can make the child die before it registers signal handlers
+                # for SIGINT and SIGTERM. The mask is unregistered after spawning
+                # the child.
+                try:
+                    if _HAVE_SIGMASK:
+                        signal.pthread_sigmask(signal.SIG_BLOCK, _IGNORED_SIGNALS)
+                    pid = util.spawnv_passfds(exe, args, fds_to_pass)
+                finally:
+                    if _HAVE_SIGMASK:
+                        signal.pthread_sigmask(signal.SIG_UNBLOCK, _IGNORED_SIGNALS)
+            except:
+                os.close(w)
+                raise
+            else:
+                self._fd = w
+                self._pid = pid
+            finally:
+                os.close(r)
+
+    def _check_alive(self):
+        '''Check that the pipe has not been closed by sending a probe.'''
+        try:
+            # We cannot use send here as it calls ensure_running, creating
+            # a cycle.
+            os.write(self._fd, b'PROBE:0:noop\n')
+        except OSError:
+            return False
+        else:
+            return True
156
+ def register(self, name, rtype):
157
+ '''Register name of resource with resource tracker.'''
158
+ self._send('REGISTER', name, rtype)
159
+
160
+ def unregister(self, name, rtype):
161
+ '''Unregister name of resource with resource tracker.'''
162
+ self._send('UNREGISTER', name, rtype)
163
+
164
+ def _send(self, cmd, name, rtype):
165
+ self.ensure_running()
166
+ msg = '{0}:{1}:{2}\n'.format(cmd, name, rtype).encode('ascii')
167
+ if len(msg) > 512:
168
+ # posix guarantees that writes to a pipe of less than PIPE_BUF
169
+ # bytes are atomic, and that PIPE_BUF >= 512
170
+ raise ValueError('msg too long')
171
+ nbytes = os.write(self._fd, msg)
172
+ assert nbytes == len(msg), "nbytes {0:n} but len(msg) {1:n}".format(
173
+ nbytes, len(msg))
174
+
175
+
176
+ _resource_tracker = ResourceTracker()
177
+ ensure_running = _resource_tracker.ensure_running
178
+ register = _resource_tracker.register
179
+ unregister = _resource_tracker.unregister
180
+ getfd = _resource_tracker.getfd
181
+
182
+ def main(fd):
183
+ '''Run resource tracker.'''
184
+ # protect the process from ^C and "killall python" etc
185
+ signal.signal(signal.SIGINT, signal.SIG_IGN)
186
+ signal.signal(signal.SIGTERM, signal.SIG_IGN)
187
+ if _HAVE_SIGMASK:
188
+ signal.pthread_sigmask(signal.SIG_UNBLOCK, _IGNORED_SIGNALS)
189
+
190
+ for f in (sys.stdin, sys.stdout):
191
+ try:
192
+ f.close()
193
+ except Exception:
194
+ pass
195
+
196
+ cache = {rtype: set() for rtype in _CLEANUP_FUNCS.keys()}
197
+ try:
198
+ # keep track of registered/unregistered resources
199
+ with open(fd, 'rb') as f:
200
+ for line in f:
201
+ try:
202
+ cmd, name, rtype = line.strip().decode('ascii').split(':')
203
+ cleanup_func = _CLEANUP_FUNCS.get(rtype, None)
204
+ if cleanup_func is None:
205
+ raise ValueError(
206
+ f'Cannot register {name} for automatic cleanup: '
207
+ f'unknown resource type {rtype}')
208
+
209
+ if cmd == 'REGISTER':
210
+ cache[rtype].add(name)
211
+ elif cmd == 'UNREGISTER':
212
+ cache[rtype].remove(name)
213
+ elif cmd == 'PROBE':
214
+ pass
215
+ else:
216
+ raise RuntimeError('unrecognized command %r' % cmd)
217
+ except Exception:
218
+ try:
219
+ sys.excepthook(*sys.exc_info())
220
+ except:
221
+ pass
222
+ finally:
223
+ # all processes have terminated; cleanup any remaining resources
224
+ for rtype, rtype_cache in cache.items():
225
+ if rtype_cache:
226
+ try:
227
+ warnings.warn('resource_tracker: There appear to be %d '
228
+ 'leaked %s objects to clean up at shutdown' %
229
+ (len(rtype_cache), rtype))
230
+ except Exception:
231
+ pass
232
+ for name in rtype_cache:
233
+ # For some reason the process which created and registered this
234
+ # resource has failed to unregister it. Presumably it has
235
+ # died. We therefore unlink it.
236
+ try:
237
+ try:
238
+ _CLEANUP_FUNCS[rtype](name)
239
+ except Exception as e:
240
+ warnings.warn('resource_tracker: %r: %s' % (name, e))
241
+ finally:
242
+ pass
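The tracker loop in `main()` above consumes newline-delimited ASCII messages of the form `CMD:name:rtype` over a pipe. A stand-alone sketch of that bookkeeping (the helper names `parse_tracker_line` and `apply_command` are illustrative, not part of the module):

```python
# Illustrative sketch of the resource_tracker wire protocol handled by main().
# Assumption: helper names are made up for this example.
def parse_tracker_line(line: bytes):
    """Split one tracker message into (cmd, name, rtype), as main() does."""
    cmd, name, rtype = line.strip().decode('ascii').split(':')
    return cmd, name, rtype

def apply_command(cache: dict, line: bytes) -> dict:
    """Update the per-rtype cache of live resource names for one message."""
    cmd, name, rtype = parse_tracker_line(line)
    if cmd == 'REGISTER':
        cache.setdefault(rtype, set()).add(name)
    elif cmd == 'UNREGISTER':
        cache[rtype].discard(name)
    elif cmd == 'PROBE':
        pass  # liveness probe sent by _check_alive(); no state change
    else:
        raise RuntimeError('unrecognized command %r' % cmd)
    return cache

cache = {}
apply_command(cache, b'REGISTER:/psm_abc:shared_memory\n')
assert cache == {'shared_memory': {'/psm_abc'}}
apply_command(cache, b'PROBE:0:noop\n')
apply_command(cache, b'UNREGISTER:/psm_abc:shared_memory\n')
assert cache['shared_memory'] == set()
```

Anything still present in the cache when the pipe closes is treated as leaked and unlinked via `_CLEANUP_FUNCS`.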
env-llmeval/lib/python3.10/site-packages/multiprocess/shared_memory.py ADDED
@@ -0,0 +1,534 @@
+ """Provides shared memory for direct access across processes.
+
+ The API of this package is currently provisional. Refer to the
+ documentation for details.
+ """
+
+
+ __all__ = [ 'SharedMemory', 'ShareableList' ]
+
+
+ from functools import partial
+ import mmap
+ import os
+ import errno
+ import struct
+ import secrets
+ import types
+
+ if os.name == "nt":
+     import _winapi
+     _USE_POSIX = False
+ else:
+     import _posixshmem
+     _USE_POSIX = True
+
+ from . import resource_tracker
+
+ _O_CREX = os.O_CREAT | os.O_EXCL
+
+ # FreeBSD (and perhaps other BSDs) limit names to 14 characters.
+ _SHM_SAFE_NAME_LENGTH = 14
+
+ # Shared memory block name prefix
+ if _USE_POSIX:
+     _SHM_NAME_PREFIX = '/psm_'
+ else:
+     _SHM_NAME_PREFIX = 'wnsm_'
+
+
+ def _make_filename():
+     "Create a random filename for the shared memory object."
+     # number of random bytes to use for name
+     nbytes = (_SHM_SAFE_NAME_LENGTH - len(_SHM_NAME_PREFIX)) // 2
+     assert nbytes >= 2, '_SHM_NAME_PREFIX too long'
+     name = _SHM_NAME_PREFIX + secrets.token_hex(nbytes)
+     assert len(name) <= _SHM_SAFE_NAME_LENGTH
+     return name
+
+
+ class SharedMemory:
+     """Creates a new shared memory block or attaches to an existing
+     shared memory block.
+
+     Every shared memory block is assigned a unique name.  This enables
+     one process to create a shared memory block with a particular name
+     so that a different process can attach to that same shared memory
+     block using that same name.
+
+     As a resource for sharing data across processes, shared memory blocks
+     may outlive the original process that created them.  When one process
+     no longer needs access to a shared memory block that might still be
+     needed by other processes, the close() method should be called.
+     When a shared memory block is no longer needed by any process, the
+     unlink() method should be called to ensure proper cleanup."""
+
+     # Defaults; enables close() and unlink() to run without errors.
+     _name = None
+     _fd = -1
+     _mmap = None
+     _buf = None
+     _flags = os.O_RDWR
+     _mode = 0o600
+     _prepend_leading_slash = True if _USE_POSIX else False
+
+     def __init__(self, name=None, create=False, size=0):
+         if not size >= 0:
+             raise ValueError("'size' must be a positive integer")
+         if create:
+             self._flags = _O_CREX | os.O_RDWR
+             if size == 0:
+                 raise ValueError("'size' must be a positive number different from zero")
+         if name is None and not self._flags & os.O_EXCL:
+             raise ValueError("'name' can only be None if create=True")
+
+         if _USE_POSIX:
+
+             # POSIX Shared Memory
+
+             if name is None:
+                 while True:
+                     name = _make_filename()
+                     try:
+                         self._fd = _posixshmem.shm_open(
+                             name,
+                             self._flags,
+                             mode=self._mode
+                         )
+                     except FileExistsError:
+                         continue
+                     self._name = name
+                     break
+             else:
+                 name = "/" + name if self._prepend_leading_slash else name
+                 self._fd = _posixshmem.shm_open(
+                     name,
+                     self._flags,
+                     mode=self._mode
+                 )
+                 self._name = name
+             try:
+                 if create and size:
+                     os.ftruncate(self._fd, size)
+                 stats = os.fstat(self._fd)
+                 size = stats.st_size
+                 self._mmap = mmap.mmap(self._fd, size)
+             except OSError:
+                 self.unlink()
+                 raise
+
+             resource_tracker.register(self._name, "shared_memory")
+
+         else:
+
+             # Windows Named Shared Memory
+
+             if create:
+                 while True:
+                     temp_name = _make_filename() if name is None else name
+                     # Create and reserve shared memory block with this name
+                     # until it can be attached to by mmap.
+                     h_map = _winapi.CreateFileMapping(
+                         _winapi.INVALID_HANDLE_VALUE,
+                         _winapi.NULL,
+                         _winapi.PAGE_READWRITE,
+                         (size >> 32) & 0xFFFFFFFF,
+                         size & 0xFFFFFFFF,
+                         temp_name
+                     )
+                     try:
+                         last_error_code = _winapi.GetLastError()
+                         if last_error_code == _winapi.ERROR_ALREADY_EXISTS:
+                             if name is not None:
+                                 raise FileExistsError(
+                                     errno.EEXIST,
+                                     os.strerror(errno.EEXIST),
+                                     name,
+                                     _winapi.ERROR_ALREADY_EXISTS
+                                 )
+                             else:
+                                 continue
+                         self._mmap = mmap.mmap(-1, size, tagname=temp_name)
+                     finally:
+                         _winapi.CloseHandle(h_map)
+                     self._name = temp_name
+                     break
+
+             else:
+                 self._name = name
+                 # Dynamically determine the existing named shared memory
+                 # block's size which is likely a multiple of mmap.PAGESIZE.
+                 h_map = _winapi.OpenFileMapping(
+                     _winapi.FILE_MAP_READ,
+                     False,
+                     name
+                 )
+                 try:
+                     p_buf = _winapi.MapViewOfFile(
+                         h_map,
+                         _winapi.FILE_MAP_READ,
+                         0,
+                         0,
+                         0
+                     )
+                 finally:
+                     _winapi.CloseHandle(h_map)
+                 try:
+                     size = _winapi.VirtualQuerySize(p_buf)
+                 finally:
+                     _winapi.UnmapViewOfFile(p_buf)
+                 self._mmap = mmap.mmap(-1, size, tagname=name)
+
+         self._size = size
+         self._buf = memoryview(self._mmap)
+
+     def __del__(self):
+         try:
+             self.close()
+         except OSError:
+             pass
+
+     def __reduce__(self):
+         return (
+             self.__class__,
+             (
+                 self.name,
+                 False,
+                 self.size,
+             ),
+         )
+
+     def __repr__(self):
+         return f'{self.__class__.__name__}({self.name!r}, size={self.size})'
+
+     @property
+     def buf(self):
+         "A memoryview of contents of the shared memory block."
+         return self._buf
+
+     @property
+     def name(self):
+         "Unique name that identifies the shared memory block."
+         reported_name = self._name
+         if _USE_POSIX and self._prepend_leading_slash:
+             if self._name.startswith("/"):
+                 reported_name = self._name[1:]
+         return reported_name
+
+     @property
+     def size(self):
+         "Size in bytes."
+         return self._size
+
+     def close(self):
+         """Closes access to the shared memory from this instance but does
+         not destroy the shared memory block."""
+         if self._buf is not None:
+             self._buf.release()
+             self._buf = None
+         if self._mmap is not None:
+             self._mmap.close()
+             self._mmap = None
+         if _USE_POSIX and self._fd >= 0:
+             os.close(self._fd)
+             self._fd = -1
+
+     def unlink(self):
+         """Requests that the underlying shared memory block be destroyed.
+
+         In order to ensure proper cleanup of resources, unlink should be
+         called once (and only once) across all processes which have access
+         to the shared memory block."""
+         if _USE_POSIX and self._name:
+             _posixshmem.shm_unlink(self._name)
+             resource_tracker.unregister(self._name, "shared_memory")
+
+
+ _encoding = "utf8"
+
+ class ShareableList:
+     """Pattern for a mutable list-like object shareable via a shared
+     memory block.  It differs from the built-in list type in that these
+     lists can not change their overall length (i.e. no append, insert,
+     etc.)
+
+     Because values are packed into a memoryview as bytes, the struct
+     packing format for any storable value must require no more than 8
+     characters to describe its format."""
+
+     # The shared memory area is organized as follows:
+     # - 8 bytes: number of items (N) as a 64-bit integer
+     # - (N + 1) * 8 bytes: offsets of each element from the start of the
+     #   data area
+     # - K bytes: the data area storing item values (with encoding and size
+     #   depending on their respective types)
+     # - N * 8 bytes: `struct` format string for each element
+     # - N bytes: index into _back_transforms_mapping for each element
+     #   (for reconstructing the corresponding Python value)
+     _types_mapping = {
+         int: "q",
+         float: "d",
+         bool: "xxxxxxx?",
+         str: "%ds",
+         bytes: "%ds",
+         None.__class__: "xxxxxx?x",
+     }
+     _alignment = 8
+     _back_transforms_mapping = {
+         0: lambda value: value,  # int, float, bool
+         1: lambda value: value.rstrip(b'\x00').decode(_encoding),  # str
+         2: lambda value: value.rstrip(b'\x00'),  # bytes
+         3: lambda _value: None,  # None
+     }
+
+     @staticmethod
+     def _extract_recreation_code(value):
+         """Used in concert with _back_transforms_mapping to convert values
+         into the appropriate Python objects when retrieving them from
+         the list as well as when storing them."""
+         if not isinstance(value, (str, bytes, None.__class__)):
+             return 0
+         elif isinstance(value, str):
+             return 1
+         elif isinstance(value, bytes):
+             return 2
+         else:
+             return 3  # NoneType
+
+     def __init__(self, sequence=None, *, name=None):
+         if name is None or sequence is not None:
+             sequence = sequence or ()
+             _formats = [
+                 self._types_mapping[type(item)]
+                     if not isinstance(item, (str, bytes))
+                     else self._types_mapping[type(item)] % (
+                         self._alignment * (len(item) // self._alignment + 1),
+                     )
+                 for item in sequence
+             ]
+             self._list_len = len(_formats)
+             assert sum(len(fmt) <= 8 for fmt in _formats) == self._list_len
+             offset = 0
+             # The offsets of each list element into the shared memory's
+             # data area (0 meaning the start of the data area, not the start
+             # of the shared memory area).
+             self._allocated_offsets = [0]
+             for fmt in _formats:
+                 offset += self._alignment if fmt[-1] != "s" else int(fmt[:-1])
+                 self._allocated_offsets.append(offset)
+             _recreation_codes = [
+                 self._extract_recreation_code(item) for item in sequence
+             ]
+             requested_size = struct.calcsize(
+                 "q" + self._format_size_metainfo +
+                 "".join(_formats) +
+                 self._format_packing_metainfo +
+                 self._format_back_transform_codes
+             )
+
+             self.shm = SharedMemory(name, create=True, size=requested_size)
+         else:
+             self.shm = SharedMemory(name)
+
+         if sequence is not None:
+             _enc = _encoding
+             struct.pack_into(
+                 "q" + self._format_size_metainfo,
+                 self.shm.buf,
+                 0,
+                 self._list_len,
+                 *(self._allocated_offsets)
+             )
+             struct.pack_into(
+                 "".join(_formats),
+                 self.shm.buf,
+                 self._offset_data_start,
+                 *(v.encode(_enc) if isinstance(v, str) else v for v in sequence)
+             )
+             struct.pack_into(
+                 self._format_packing_metainfo,
+                 self.shm.buf,
+                 self._offset_packing_formats,
+                 *(v.encode(_enc) for v in _formats)
+             )
+             struct.pack_into(
+                 self._format_back_transform_codes,
+                 self.shm.buf,
+                 self._offset_back_transform_codes,
+                 *(_recreation_codes)
+             )
+
+         else:
+             self._list_len = len(self)  # Obtains size from offset 0 in buffer.
+             self._allocated_offsets = list(
+                 struct.unpack_from(
+                     self._format_size_metainfo,
+                     self.shm.buf,
+                     1 * 8
+                 )
+             )
+
+     def _get_packing_format(self, position):
+         "Gets the packing format for a single value stored in the list."
+         position = position if position >= 0 else position + self._list_len
+         if (position >= self._list_len) or (self._list_len < 0):
+             raise IndexError("Requested position out of range.")
+
+         v = struct.unpack_from(
+             "8s",
+             self.shm.buf,
+             self._offset_packing_formats + position * 8
+         )[0]
+         fmt = v.rstrip(b'\x00')
+         fmt_as_str = fmt.decode(_encoding)
+
+         return fmt_as_str
+
+     def _get_back_transform(self, position):
+         "Gets the back transformation function for a single value."
+
+         if (position >= self._list_len) or (self._list_len < 0):
+             raise IndexError("Requested position out of range.")
+
+         transform_code = struct.unpack_from(
+             "b",
+             self.shm.buf,
+             self._offset_back_transform_codes + position
+         )[0]
+         transform_function = self._back_transforms_mapping[transform_code]
+
+         return transform_function
+
+     def _set_packing_format_and_transform(self, position, fmt_as_str, value):
+         """Sets the packing format and back transformation code for a
+         single value in the list at the specified position."""
+
+         if (position >= self._list_len) or (self._list_len < 0):
+             raise IndexError("Requested position out of range.")
+
+         struct.pack_into(
+             "8s",
+             self.shm.buf,
+             self._offset_packing_formats + position * 8,
+             fmt_as_str.encode(_encoding)
+         )
+
+         transform_code = self._extract_recreation_code(value)
+         struct.pack_into(
+             "b",
+             self.shm.buf,
+             self._offset_back_transform_codes + position,
+             transform_code
+         )
+
+     def __getitem__(self, position):
+         position = position if position >= 0 else position + self._list_len
+         try:
+             offset = self._offset_data_start + self._allocated_offsets[position]
+             (v,) = struct.unpack_from(
+                 self._get_packing_format(position),
+                 self.shm.buf,
+                 offset
+             )
+         except IndexError:
+             raise IndexError("index out of range")
+
+         back_transform = self._get_back_transform(position)
+         v = back_transform(v)
+
+         return v
+
+     def __setitem__(self, position, value):
+         position = position if position >= 0 else position + self._list_len
+         try:
+             item_offset = self._allocated_offsets[position]
+             offset = self._offset_data_start + item_offset
+             current_format = self._get_packing_format(position)
+         except IndexError:
+             raise IndexError("assignment index out of range")
+
+         if not isinstance(value, (str, bytes)):
+             new_format = self._types_mapping[type(value)]
+             encoded_value = value
+         else:
+             allocated_length = self._allocated_offsets[position + 1] - item_offset
+
+             encoded_value = (value.encode(_encoding)
+                              if isinstance(value, str) else value)
+             if len(encoded_value) > allocated_length:
+                 raise ValueError("bytes/str item exceeds available storage")
+             if current_format[-1] == "s":
+                 new_format = current_format
+             else:
+                 new_format = self._types_mapping[str] % (
+                     allocated_length,
+                 )
+
+         self._set_packing_format_and_transform(
+             position,
+             new_format,
+             value
+         )
+         struct.pack_into(new_format, self.shm.buf, offset, encoded_value)
+
+     def __reduce__(self):
+         return partial(self.__class__, name=self.shm.name), ()
+
+     def __len__(self):
+         return struct.unpack_from("q", self.shm.buf, 0)[0]
+
+     def __repr__(self):
+         return f'{self.__class__.__name__}({list(self)}, name={self.shm.name!r})'
+
+     @property
+     def format(self):
+         "The struct packing format used by all currently stored items."
+         return "".join(
+             self._get_packing_format(i) for i in range(self._list_len)
+         )
+
+     @property
+     def _format_size_metainfo(self):
+         "The struct packing format used for the items' storage offsets."
+         return "q" * (self._list_len + 1)
+
+     @property
+     def _format_packing_metainfo(self):
+         "The struct packing format used for the items' packing formats."
+         return "8s" * self._list_len
+
+     @property
+     def _format_back_transform_codes(self):
+         "The struct packing format used for the items' back transforms."
+         return "b" * self._list_len
+
+     @property
+     def _offset_data_start(self):
+         # - 8 bytes for the list length
+         # - (N + 1) * 8 bytes for the element offsets
+         return (self._list_len + 2) * 8
+
+     @property
+     def _offset_packing_formats(self):
+         return self._offset_data_start + self._allocated_offsets[-1]
+
+     @property
+     def _offset_back_transform_codes(self):
+         return self._offset_packing_formats + self._list_len * 8
+
+     def count(self, value):
+         "L.count(value) -> integer -- return number of occurrences of value."
+
+         return sum(value == entry for entry in self)
+
+     def index(self, value):
+         """L.index(value) -> integer -- return first index of value.
+         Raises ValueError if the value is not present."""
+
+         for position, entry in enumerate(self):
+             if value == entry:
+                 return position
+         else:
+             raise ValueError(f"{value!r} not in this container")
+
+     __class_getitem__ = classmethod(types.GenericAlias)
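The `SharedMemory`/`ShareableList` API added above mirrors the stdlib `multiprocessing.shared_memory` module. A minimal usage sketch (using the stdlib twin so it runs without this package installed):

```python
# Minimal usage sketch of the API defined above. Assumption: the stdlib
# multiprocessing.shared_memory module is used; this file mirrors its API.
from multiprocessing import shared_memory

# Fixed-length, cross-process list; element count cannot change after creation.
sl = shared_memory.ShareableList(['howdy', b'HoWdY', -273.154, 100, None, True])
assert sl[0] == 'howdy'
assert sl[4] is None
sl[3] = 42                 # in-place update within the element's 8-byte slot
assert sl[3] == 42

# A second handle attaches by name, as another process would.
attached = shared_memory.ShareableList(name=sl.shm.name)
assert attached[3] == 42
attached.shm.close()

sl.shm.close()             # this handle is done with the block
sl.shm.unlink()            # destroy the block once no process needs it
```

Note the two-step teardown the docstrings describe: every handle calls `close()`, and exactly one process calls `unlink()`.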
env-llmeval/lib/python3.10/site-packages/multiprocess/sharedctypes.py ADDED
@@ -0,0 +1,240 @@
+ #
+ # Module which supports allocation of ctypes objects from shared memory
+ #
+ # multiprocessing/sharedctypes.py
+ #
+ # Copyright (c) 2006-2008, R Oudkerk
+ # Licensed to PSF under a Contributor Agreement.
+ #
+
+ import ctypes
+ import weakref
+
+ from . import heap
+ from . import get_context
+
+ from .context import reduction, assert_spawning
+ _ForkingPickler = reduction.ForkingPickler
+
+ __all__ = ['RawValue', 'RawArray', 'Value', 'Array', 'copy', 'synchronized']
+
+ #
+ #
+ #
+
+ typecode_to_type = {
+     'c': ctypes.c_char,     'u': ctypes.c_wchar,
+     'b': ctypes.c_byte,     'B': ctypes.c_ubyte,
+     'h': ctypes.c_short,    'H': ctypes.c_ushort,
+     'i': ctypes.c_int,      'I': ctypes.c_uint,
+     'l': ctypes.c_long,     'L': ctypes.c_ulong,
+     'q': ctypes.c_longlong, 'Q': ctypes.c_ulonglong,
+     'f': ctypes.c_float,    'd': ctypes.c_double
+     }
+
+ #
+ #
+ #
+
+ def _new_value(type_):
+     size = ctypes.sizeof(type_)
+     wrapper = heap.BufferWrapper(size)
+     return rebuild_ctype(type_, wrapper, None)
+
+ def RawValue(typecode_or_type, *args):
+     '''
+     Returns a ctypes object allocated from shared memory
+     '''
+     type_ = typecode_to_type.get(typecode_or_type, typecode_or_type)
+     obj = _new_value(type_)
+     ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj))
+     obj.__init__(*args)
+     return obj
+
+ def RawArray(typecode_or_type, size_or_initializer):
+     '''
+     Returns a ctypes array allocated from shared memory
+     '''
+     type_ = typecode_to_type.get(typecode_or_type, typecode_or_type)
+     if isinstance(size_or_initializer, int):
+         type_ = type_ * size_or_initializer
+         obj = _new_value(type_)
+         ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj))
+         return obj
+     else:
+         type_ = type_ * len(size_or_initializer)
+         result = _new_value(type_)
+         result.__init__(*size_or_initializer)
+         return result
+
+ def Value(typecode_or_type, *args, lock=True, ctx=None):
+     '''
+     Return a synchronization wrapper for a Value
+     '''
+     obj = RawValue(typecode_or_type, *args)
+     if lock is False:
+         return obj
+     if lock in (True, None):
+         ctx = ctx or get_context()
+         lock = ctx.RLock()
+     if not hasattr(lock, 'acquire'):
+         raise AttributeError("%r has no method 'acquire'" % lock)
+     return synchronized(obj, lock, ctx=ctx)
+
+ def Array(typecode_or_type, size_or_initializer, *, lock=True, ctx=None):
+     '''
+     Return a synchronization wrapper for a RawArray
+     '''
+     obj = RawArray(typecode_or_type, size_or_initializer)
+     if lock is False:
+         return obj
+     if lock in (True, None):
+         ctx = ctx or get_context()
+         lock = ctx.RLock()
+     if not hasattr(lock, 'acquire'):
+         raise AttributeError("%r has no method 'acquire'" % lock)
+     return synchronized(obj, lock, ctx=ctx)
+
+ def copy(obj):
+     new_obj = _new_value(type(obj))
+     ctypes.pointer(new_obj)[0] = obj
+     return new_obj
+
+ def synchronized(obj, lock=None, ctx=None):
+     assert not isinstance(obj, SynchronizedBase), 'object already synchronized'
+     ctx = ctx or get_context()
+
+     if isinstance(obj, ctypes._SimpleCData):
+         return Synchronized(obj, lock, ctx)
+     elif isinstance(obj, ctypes.Array):
+         if obj._type_ is ctypes.c_char:
+             return SynchronizedString(obj, lock, ctx)
+         return SynchronizedArray(obj, lock, ctx)
+     else:
+         cls = type(obj)
+         try:
+             scls = class_cache[cls]
+         except KeyError:
+             names = [field[0] for field in cls._fields_]
+             d = {name: make_property(name) for name in names}
+             classname = 'Synchronized' + cls.__name__
+             scls = class_cache[cls] = type(classname, (SynchronizedBase,), d)
+         return scls(obj, lock, ctx)
+
+ #
+ # Functions for pickling/unpickling
+ #
+
+ def reduce_ctype(obj):
+     assert_spawning(obj)
+     if isinstance(obj, ctypes.Array):
+         return rebuild_ctype, (obj._type_, obj._wrapper, obj._length_)
+     else:
+         return rebuild_ctype, (type(obj), obj._wrapper, None)
+
+ def rebuild_ctype(type_, wrapper, length):
+     if length is not None:
+         type_ = type_ * length
+     _ForkingPickler.register(type_, reduce_ctype)
+     buf = wrapper.create_memoryview()
+     obj = type_.from_buffer(buf)
+     obj._wrapper = wrapper
+     return obj
+
+ #
+ # Function to create properties
+ #
+
+ def make_property(name):
+     try:
+         return prop_cache[name]
+     except KeyError:
+         d = {}
+         exec(template % ((name,)*7), d)
+         prop_cache[name] = d[name]
+         return d[name]
+
+ template = '''
+ def get%s(self):
+     self.acquire()
+     try:
+         return self._obj.%s
+     finally:
+         self.release()
+ def set%s(self, value):
+     self.acquire()
+     try:
+         self._obj.%s = value
+     finally:
+         self.release()
+ %s = property(get%s, set%s)
+ '''
+
+ prop_cache = {}
+ class_cache = weakref.WeakKeyDictionary()
+
+ #
+ # Synchronized wrappers
+ #
+
+ class SynchronizedBase(object):
+
+     def __init__(self, obj, lock=None, ctx=None):
+         self._obj = obj
+         if lock:
+             self._lock = lock
+         else:
+             ctx = ctx or get_context(force=True)
+             self._lock = ctx.RLock()
+         self.acquire = self._lock.acquire
+         self.release = self._lock.release
+
+     def __enter__(self):
+         return self._lock.__enter__()
+
+     def __exit__(self, *args):
+         return self._lock.__exit__(*args)
+
+     def __reduce__(self):
+         assert_spawning(self)
+         return synchronized, (self._obj, self._lock)
+
+     def get_obj(self):
+         return self._obj
+
+     def get_lock(self):
+         return self._lock
+
+     def __repr__(self):
+         return '<%s wrapper for %s>' % (type(self).__name__, self._obj)
+
+
+ class Synchronized(SynchronizedBase):
+     value = make_property('value')
+
+
+ class SynchronizedArray(SynchronizedBase):
+
+     def __len__(self):
+         return len(self._obj)
+
+     def __getitem__(self, i):
+         with self:
+             return self._obj[i]
+
+     def __setitem__(self, i, value):
+         with self:
+             self._obj[i] = value
+
+     def __getslice__(self, start, stop):
+         with self:
+             return self._obj[start:stop]
+
+     def __setslice__(self, start, stop, values):
+         with self:
+             self._obj[start:stop] = values
+
+
+ class SynchronizedString(SynchronizedArray):
+     value = make_property('value')
+     raw = make_property('raw')
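The `Value`/`Array` factories defined above wrap shared-memory ctypes objects in lock-holding proxies. A minimal usage sketch (using the stdlib `multiprocessing` twin of this module, so it runs as-is):

```python
# Minimal usage sketch of Value/Array as defined above. Assumption: the stdlib
# multiprocessing.sharedctypes twin is exercised instead of this package.
from multiprocessing import Value, Array

counter = Value('i', 7)        # c_int in shared memory, wrapped with an RLock
with counter.get_lock():       # explicit lock for a read-modify-write
    counter.value += 1
assert counter.value == 8

samples = Array('d', [1.0, 2.0, 3.0])   # c_double array from an initializer
samples[0] = 9.0               # __setitem__ acquires the wrapper's lock
assert list(samples) == [9.0, 2.0, 3.0]
assert len(samples) == 3
```

Passing `lock=False` returns the bare `RawValue`/`RawArray` with no synchronization, which is only safe when access is already serialized.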
env-llmeval/lib/python3.10/site-packages/multiprocess/spawn.py ADDED
@@ -0,0 +1,297 @@
#
# Code used to start processes when using the spawn or forkserver
# start methods.
#
# multiprocessing/spawn.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#

import os
import sys
import runpy
import types

from . import get_start_method, set_start_method
from . import process
from .context import reduction
from . import util

__all__ = ['_main', 'freeze_support', 'set_executable', 'get_executable',
           'get_preparation_data', 'get_command_line', 'import_main_path']

#
# _python_exe is the assumed path to the python executable.
# People embedding Python want to modify it.
#

if sys.platform != 'win32':
    WINEXE = False
    WINSERVICE = False
else:
    WINEXE = getattr(sys, 'frozen', False)
    WINSERVICE = sys.executable.lower().endswith("pythonservice.exe")

if WINSERVICE:
    _python_exe = os.path.join(sys.exec_prefix, 'python.exe')
else:
    _python_exe = sys.executable

def set_executable(exe):
    global _python_exe
    _python_exe = exe

def get_executable():
    return _python_exe

#
#
#

def is_forking(argv):
    '''
    Return whether commandline indicates we are forking
    '''
    if len(argv) >= 2 and argv[1] == '--multiprocessing-fork':
        return True
    else:
        return False


def freeze_support():
    '''
    Run code for process object if this is not the main process
    '''
    if is_forking(sys.argv):
        kwds = {}
        for arg in sys.argv[2:]:
            name, value = arg.split('=')
            if value == 'None':
                kwds[name] = None
            else:
                kwds[name] = int(value)
        spawn_main(**kwds)
        sys.exit()


def get_command_line(**kwds):
    '''
    Returns prefix of command line used for spawning a child process
    '''
    if getattr(sys, 'frozen', False):
        return ([sys.executable, '--multiprocessing-fork'] +
                ['%s=%r' % item for item in kwds.items()])
    else:
        prog = 'from multiprocess.spawn import spawn_main; spawn_main(%s)'
        prog %= ', '.join('%s=%r' % item for item in kwds.items())
        opts = util._args_from_interpreter_flags()
        return [_python_exe] + opts + ['-c', prog, '--multiprocessing-fork']


def spawn_main(pipe_handle, parent_pid=None, tracker_fd=None):
    '''
    Run code specified by data received over pipe
    '''
    assert is_forking(sys.argv), "Not forking"
    if sys.platform == 'win32':
        import msvcrt
        import _winapi

        if parent_pid is not None:
            source_process = _winapi.OpenProcess(
                _winapi.SYNCHRONIZE | _winapi.PROCESS_DUP_HANDLE,
                False, parent_pid)
        else:
            source_process = None
        new_handle = reduction.duplicate(pipe_handle,
                                         source_process=source_process)
        fd = msvcrt.open_osfhandle(new_handle, os.O_RDONLY)
        parent_sentinel = source_process
    else:
        from . import resource_tracker
        resource_tracker._resource_tracker._fd = tracker_fd
        fd = pipe_handle
        parent_sentinel = os.dup(pipe_handle)
    exitcode = _main(fd, parent_sentinel)
    sys.exit(exitcode)


def _main(fd, parent_sentinel):
    with os.fdopen(fd, 'rb', closefd=True) as from_parent:
        process.current_process()._inheriting = True
        try:
            preparation_data = reduction.pickle.load(from_parent)
            prepare(preparation_data)
            self = reduction.pickle.load(from_parent)
        finally:
            del process.current_process()._inheriting
    return self._bootstrap(parent_sentinel)


def _check_not_importing_main():
    if getattr(process.current_process(), '_inheriting', False):
        raise RuntimeError('''
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.''')


def get_preparation_data(name):
    '''
    Return info about parent needed by child to unpickle process object
    '''
    _check_not_importing_main()
    d = dict(
        log_to_stderr=util._log_to_stderr,
        authkey=process.current_process().authkey,
        )

    if util._logger is not None:
        d['log_level'] = util._logger.getEffectiveLevel()

    sys_path = sys.path.copy()
    try:
        i = sys_path.index('')
    except ValueError:
        pass
    else:
        sys_path[i] = process.ORIGINAL_DIR

    d.update(
        name=name,
        sys_path=sys_path,
        sys_argv=sys.argv,
        orig_dir=process.ORIGINAL_DIR,
        dir=os.getcwd(),
        start_method=get_start_method(),
        )

    # Figure out whether to initialise main in the subprocess as a module
    # or through direct execution (or to leave it alone entirely)
    main_module = sys.modules['__main__']
    main_mod_name = getattr(main_module.__spec__, "name", None)
    if main_mod_name is not None:
        d['init_main_from_name'] = main_mod_name
    elif sys.platform != 'win32' or (not WINEXE and not WINSERVICE):
        main_path = getattr(main_module, '__file__', None)
        if main_path is not None:
            if (not os.path.isabs(main_path) and
                    process.ORIGINAL_DIR is not None):
                main_path = os.path.join(process.ORIGINAL_DIR, main_path)
            d['init_main_from_path'] = os.path.normpath(main_path)

    return d

#
# Prepare current process
#

old_main_modules = []

def prepare(data):
    '''
    Try to get current process ready to unpickle process object
    '''
    if 'name' in data:
        process.current_process().name = data['name']

    if 'authkey' in data:
        process.current_process().authkey = data['authkey']

    if 'log_to_stderr' in data and data['log_to_stderr']:
        util.log_to_stderr()

    if 'log_level' in data:
        util.get_logger().setLevel(data['log_level'])

    if 'sys_path' in data:
        sys.path = data['sys_path']

    if 'sys_argv' in data:
        sys.argv = data['sys_argv']

    if 'dir' in data:
        os.chdir(data['dir'])

    if 'orig_dir' in data:
        process.ORIGINAL_DIR = data['orig_dir']

    if 'start_method' in data:
        set_start_method(data['start_method'], force=True)

    if 'init_main_from_name' in data:
        _fixup_main_from_name(data['init_main_from_name'])
    elif 'init_main_from_path' in data:
        _fixup_main_from_path(data['init_main_from_path'])

# Multiprocessing module helpers to fix up the main module in
# spawned subprocesses
def _fixup_main_from_name(mod_name):
    # __main__.py files for packages, directories, zip archives, etc, run
    # their "main only" code unconditionally, so we don't even try to
    # populate anything in __main__, nor do we make any changes to
    # __main__ attributes
    current_main = sys.modules['__main__']
    if mod_name == "__main__" or mod_name.endswith(".__main__"):
        return

    # If this process was forked, __main__ may already be populated
    if getattr(current_main.__spec__, "name", None) == mod_name:
        return

    # Otherwise, __main__ may contain some non-main code where we need to
    # support unpickling it properly. We rerun it as __mp_main__ and make
    # the normal __main__ an alias to that
    old_main_modules.append(current_main)
    main_module = types.ModuleType("__mp_main__")
    main_content = runpy.run_module(mod_name,
                                    run_name="__mp_main__",
                                    alter_sys=True)
    main_module.__dict__.update(main_content)
    sys.modules['__main__'] = sys.modules['__mp_main__'] = main_module


def _fixup_main_from_path(main_path):
    # If this process was forked, __main__ may already be populated
    current_main = sys.modules['__main__']

    # Unfortunately, the main ipython launch script historically had no
    # "if __name__ == '__main__'" guard, so we work around that
    # by treating it like a __main__.py file
    # See https://github.com/ipython/ipython/issues/4698
    main_name = os.path.splitext(os.path.basename(main_path))[0]
    if main_name == 'ipython':
        return

    # Otherwise, if __file__ already has the setting we expect,
    # there's nothing more to do
    if getattr(current_main, '__file__', None) == main_path:
        return

    # If the parent process has sent a path through rather than a module
    # name we assume it is an executable script that may contain
    # non-main code that needs to be executed
    old_main_modules.append(current_main)
    main_module = types.ModuleType("__mp_main__")
    main_content = runpy.run_path(main_path,
                                  run_name="__mp_main__")
    main_module.__dict__.update(main_content)
    sys.modules['__main__'] = sys.modules['__mp_main__'] = main_module


def import_main_path(main_path):
    '''
    Set sys.modules['__main__'] to module at main_path
    '''
    _fixup_main_from_path(main_path)
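The parent/child handshake in spawn.py can be sketched in isolation: the parent builds a command line ending in the `--multiprocessing-fork` sentinel, and the child recognises it via `is_forking`. A minimal sketch follows; the simplified `get_command_line` here drops the interpreter-flag and frozen-executable handling of the real module:

```python
import sys

def is_forking(argv):
    # Mirrors spawn.is_forking: a spawned child is launched with a
    # sentinel flag in argv[1].
    return len(argv) >= 2 and argv[1] == '--multiprocessing-fork'

def get_command_line(exe, **kwds):
    # Simplified sketch of spawn.get_command_line (non-frozen case,
    # no interpreter flags).
    prog = 'from multiprocess.spawn import spawn_main; spawn_main(%s)'
    prog %= ', '.join('%s=%r' % item for item in kwds.items())
    return [exe, '-c', prog, '--multiprocessing-fork']

cmd = get_command_line(sys.executable, pipe_handle=7, parent_pid=1234)
# A child started with "python -c <prog> --multiprocessing-fork" sees
# sys.argv == ['-c', '--multiprocessing-fork'], so:
print(is_forking(['-c', '--multiprocessing-fork']))  # True
print(is_forking(['myscript.py']))                   # False
```

This is why `freeze_support()` is a no-op in a normal run: `sys.argv[1]` is only the sentinel when the process was launched by `spawn_main`'s command line.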
env-llmeval/lib/python3.10/site-packages/multiprocess/synchronize.py ADDED
@@ -0,0 +1,400 @@
#
# Module implementing synchronization primitives
#
# multiprocessing/synchronize.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#

__all__ = [
    'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Condition', 'Event'
    ]

import threading
import sys
import tempfile
try:
    import _multiprocess as _multiprocessing
except ImportError:
    import _multiprocessing
import time

from . import context
from . import process
from . import util

# Try to import the mp.synchronize module cleanly, if it fails
# raise ImportError for platforms lacking a working sem_open implementation.
# See issue 3770
try:
    from _multiprocess import SemLock, sem_unlink
except ImportError:
    try:
        from _multiprocessing import SemLock, sem_unlink
    except ImportError:
        raise ImportError("This platform lacks a functioning sem_open" +
                          " implementation, therefore, the required" +
                          " synchronization primitives needed will not" +
                          " function, see issue 3770.")

#
# Constants
#

RECURSIVE_MUTEX, SEMAPHORE = list(range(2))
SEM_VALUE_MAX = _multiprocessing.SemLock.SEM_VALUE_MAX

#
# Base class for semaphores and mutexes; wraps `_multiprocessing.SemLock`
#

class SemLock(object):

    _rand = tempfile._RandomNameSequence()

    def __init__(self, kind, value, maxvalue, *, ctx):
        if ctx is None:
            ctx = context._default_context.get_context()
        name = ctx.get_start_method()
        unlink_now = sys.platform == 'win32' or name == 'fork'
        for i in range(100):
            try:
                sl = self._semlock = _multiprocessing.SemLock(
                    kind, value, maxvalue, self._make_name(),
                    unlink_now)
            except FileExistsError:
                pass
            else:
                break
        else:
            raise FileExistsError('cannot find name for semaphore')

        util.debug('created semlock with handle %s' % sl.handle)
        self._make_methods()

        if sys.platform != 'win32':
            def _after_fork(obj):
                obj._semlock._after_fork()
            util.register_after_fork(self, _after_fork)

        if self._semlock.name is not None:
            # We only get here if we are on Unix with forking
            # disabled. When the object is garbage collected or the
            # process shuts down we unlink the semaphore name
            from .resource_tracker import register
            register(self._semlock.name, "semaphore")
            util.Finalize(self, SemLock._cleanup, (self._semlock.name,),
                          exitpriority=0)

    @staticmethod
    def _cleanup(name):
        from .resource_tracker import unregister
        sem_unlink(name)
        unregister(name, "semaphore")

    def _make_methods(self):
        self.acquire = self._semlock.acquire
        self.release = self._semlock.release

    def __enter__(self):
        return self._semlock.__enter__()

    def __exit__(self, *args):
        return self._semlock.__exit__(*args)

    def __getstate__(self):
        context.assert_spawning(self)
        sl = self._semlock
        if sys.platform == 'win32':
            h = context.get_spawning_popen().duplicate_for_child(sl.handle)
        else:
            h = sl.handle
        return (h, sl.kind, sl.maxvalue, sl.name)

    def __setstate__(self, state):
        self._semlock = _multiprocessing.SemLock._rebuild(*state)
        util.debug('recreated blocker with handle %r' % state[0])
        self._make_methods()

    @staticmethod
    def _make_name():
        return '%s-%s' % (process.current_process()._config['semprefix'],
                          next(SemLock._rand))

#
# Semaphore
#

class Semaphore(SemLock):

    def __init__(self, value=1, *, ctx):
        SemLock.__init__(self, SEMAPHORE, value, SEM_VALUE_MAX, ctx=ctx)

    def get_value(self):
        return self._semlock._get_value()

    def __repr__(self):
        try:
            value = self._semlock._get_value()
        except Exception:
            value = 'unknown'
        return '<%s(value=%s)>' % (self.__class__.__name__, value)

#
# Bounded semaphore
#

class BoundedSemaphore(Semaphore):

    def __init__(self, value=1, *, ctx):
        SemLock.__init__(self, SEMAPHORE, value, value, ctx=ctx)

    def __repr__(self):
        try:
            value = self._semlock._get_value()
        except Exception:
            value = 'unknown'
        return '<%s(value=%s, maxvalue=%s)>' % \
               (self.__class__.__name__, value, self._semlock.maxvalue)

#
# Non-recursive lock
#

class Lock(SemLock):

    def __init__(self, *, ctx):
        SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)

    def __repr__(self):
        try:
            if self._semlock._is_mine():
                name = process.current_process().name
                if threading.current_thread().name != 'MainThread':
                    name += '|' + threading.current_thread().name
            elif self._semlock._get_value() == 1:
                name = 'None'
            elif self._semlock._count() > 0:
                name = 'SomeOtherThread'
            else:
                name = 'SomeOtherProcess'
        except Exception:
            name = 'unknown'
        return '<%s(owner=%s)>' % (self.__class__.__name__, name)

#
# Recursive lock
#

class RLock(SemLock):

    def __init__(self, *, ctx):
        SemLock.__init__(self, RECURSIVE_MUTEX, 1, 1, ctx=ctx)

    def __repr__(self):
        try:
            if self._semlock._is_mine():
                name = process.current_process().name
                if threading.current_thread().name != 'MainThread':
                    name += '|' + threading.current_thread().name
                count = self._semlock._count()
            elif self._semlock._get_value() == 1:
                name, count = 'None', 0
            elif self._semlock._count() > 0:
                name, count = 'SomeOtherThread', 'nonzero'
            else:
                name, count = 'SomeOtherProcess', 'nonzero'
        except Exception:
            name, count = 'unknown', 'unknown'
        return '<%s(%s, %s)>' % (self.__class__.__name__, name, count)

#
# Condition variable
#

class Condition(object):

    def __init__(self, lock=None, *, ctx):
        self._lock = lock or ctx.RLock()
        self._sleeping_count = ctx.Semaphore(0)
        self._woken_count = ctx.Semaphore(0)
        self._wait_semaphore = ctx.Semaphore(0)
        self._make_methods()

    def __getstate__(self):
        context.assert_spawning(self)
        return (self._lock, self._sleeping_count,
                self._woken_count, self._wait_semaphore)

    def __setstate__(self, state):
        (self._lock, self._sleeping_count,
         self._woken_count, self._wait_semaphore) = state
        self._make_methods()

    def __enter__(self):
        return self._lock.__enter__()

    def __exit__(self, *args):
        return self._lock.__exit__(*args)

    def _make_methods(self):
        self.acquire = self._lock.acquire
        self.release = self._lock.release

    def __repr__(self):
        try:
            num_waiters = (self._sleeping_count._semlock._get_value() -
                           self._woken_count._semlock._get_value())
        except Exception:
            num_waiters = 'unknown'
        return '<%s(%s, %s)>' % (self.__class__.__name__, self._lock, num_waiters)

    def wait(self, timeout=None):
        assert self._lock._semlock._is_mine(), \
               'must acquire() condition before using wait()'

        # indicate that this thread is going to sleep
        self._sleeping_count.release()

        # release lock
        count = self._lock._semlock._count()
        for i in range(count):
            self._lock.release()

        try:
            # wait for notification or timeout
            return self._wait_semaphore.acquire(True, timeout)
        finally:
            # indicate that this thread has woken
            self._woken_count.release()

            # reacquire lock
            for i in range(count):
                self._lock.acquire()

    def notify(self, n=1):
        assert self._lock._semlock._is_mine(), 'lock is not owned'
        assert not self._wait_semaphore.acquire(
            False), ('notify: Should not have been able to acquire '
                     + '_wait_semaphore')

        # to take account of timeouts since last notify*() we subtract
        # woken_count from sleeping_count and rezero woken_count
        while self._woken_count.acquire(False):
            res = self._sleeping_count.acquire(False)
            assert res, ('notify: Bug in sleeping_count.acquire'
                         + '- res should not be False')

        sleepers = 0
        while sleepers < n and self._sleeping_count.acquire(False):
            self._wait_semaphore.release()        # wake up one sleeper
            sleepers += 1

        if sleepers:
            for i in range(sleepers):
                self._woken_count.acquire()       # wait for a sleeper to wake

            # rezero wait_semaphore in case some timeouts just happened
            while self._wait_semaphore.acquire(False):
                pass

    def notify_all(self):
        self.notify(n=sys.maxsize)

    def wait_for(self, predicate, timeout=None):
        result = predicate()
        if result:
            return result
        if timeout is not None:
            endtime = getattr(time,'monotonic',time.time)() + timeout
        else:
            endtime = None
            waittime = None
        while not result:
            if endtime is not None:
                waittime = endtime - getattr(time,'monotonic',time.time)()
                if waittime <= 0:
                    break
            self.wait(waittime)
            result = predicate()
        return result

#
# Event
#

class Event(object):

    def __init__(self, *, ctx):
        self._cond = ctx.Condition(ctx.Lock())
        self._flag = ctx.Semaphore(0)

    def is_set(self):
        with self._cond:
            if self._flag.acquire(False):
                self._flag.release()
                return True
            return False

    def set(self):
        with self._cond:
            self._flag.acquire(False)
            self._flag.release()
            self._cond.notify_all()

    def clear(self):
        with self._cond:
            self._flag.acquire(False)

    def wait(self, timeout=None):
        with self._cond:
            if self._flag.acquire(False):
                self._flag.release()
            else:
                self._cond.wait(timeout)

            if self._flag.acquire(False):
                self._flag.release()
                return True
            return False

#
# Barrier
#

class Barrier(threading.Barrier):

    def __init__(self, parties, action=None, timeout=None, *, ctx):
        import struct
        from .heap import BufferWrapper
        wrapper = BufferWrapper(struct.calcsize('i') * 2)
        cond = ctx.Condition()
        self.__setstate__((parties, action, timeout, cond, wrapper))
        self._state = 0
        self._count = 0

    def __setstate__(self, state):
        (self._parties, self._action, self._timeout,
         self._cond, self._wrapper) = state
        self._array = self._wrapper.create_memoryview().cast('i')

    def __getstate__(self):
        return (self._parties, self._action, self._timeout,
                self._cond, self._wrapper)

    @property
    def _state(self):
        return self._array[0]

    @_state.setter
    def _state(self, value):
        self._array[0] = value

    @property
    def _count(self):
        return self._array[1]

    @_count.setter
    def _count(self, value):
        self._array[1] = value
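The classes above back the primitives exposed by the package's public API. A single-process smoke test gives a feel for their semantics; the sketch below uses the stdlib `multiprocessing` names (multiprocess exposes the same API) so it runs without the package installed:

```python
import multiprocessing as mp

# Lock: a non-recursive mutex backed by a SemLock with value 1.
lock = mp.Lock()
assert lock.acquire(timeout=1)
lock.release()

# BoundedSemaphore(2): two slots; a third non-blocking acquire fails.
sem = mp.BoundedSemaphore(2)
sem.acquire()
sem.acquire()
print(sem.acquire(block=False))  # False: both slots taken

# Event: built from a Condition plus a Semaphore(0) flag, as above.
event = mp.Event()
print(event.is_set())            # False until set()
event.set()
print(event.wait(timeout=1))     # True once set
```

Note that on platforms without a working `sem_open` (see issue 3770 in the module header) importing these primitives raises `ImportError` instead.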
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/__init__.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/__main__.py ADDED
#!/usr/bin/env python
#
# Author: Mike McKerns (mmckerns @caltech and @uqfoundation)
# Copyright (c) 2018-2024 The Uncertainty Quantification Foundation.
# License: 3-clause BSD.  The full license text is available at:
#  - https://github.com/uqfoundation/multiprocess/blob/master/LICENSE

import glob
import os
import sys
import subprocess as sp
python = sys.executable
try:
    import pox
    python = pox.which_python(version=True) or python
except ImportError:
    pass
shell = sys.platform[:3] == 'win'

suite = os.path.dirname(__file__) or os.path.curdir
tests = glob.glob(suite + os.path.sep + 'test_*.py')
tests = glob.glob(suite + os.path.sep + '__init__.py') + \
        [i for i in tests if 'main' not in i]


if __name__ == '__main__':

    failed = 0
    for test in tests:
        p = sp.Popen([python, test], shell=shell).wait()
        if p:
            failed = 1
    print('')
    exit(failed)
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/mp_fork_bomb.py ADDED
@@ -0,0 +1,18 @@
import multiprocessing, sys

def foo():
    print("123")

# Because "if __name__ == '__main__'" is missing this will not work
# correctly on Windows.  However, we should get a RuntimeError rather
# than the Windows equivalent of a fork bomb.

if len(sys.argv) > 1:
    multiprocessing.set_start_method(sys.argv[1])
else:
    multiprocessing.set_start_method('spawn')

p = multiprocessing.Process(target=foo)
p.start()
p.join()
sys.exit(p.exitcode)
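The failure mode this script exercises can be reproduced in a contained way: run an unguarded module with the 'spawn' start method in a fresh interpreter and check that the child fails with a RuntimeError instead of respawning endlessly. A sketch, assuming a CPython where 'spawn' is available (the temp-file script below is illustrative, not part of the test suite):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# An unguarded module, like mp_fork_bomb.py: starting a 'spawn'
# Process at import time triggers the bootstrapping RuntimeError
# in the child instead of a fork bomb.
source = textwrap.dedent("""\
    import multiprocessing, sys

    def foo():
        print("123")

    # deliberately no 'if __name__ == "__main__"' guard
    multiprocessing.set_start_method('spawn')
    p = multiprocessing.Process(target=foo)
    p.start()
    p.join()
    sys.exit(p.exitcode)
""")

fd, path = tempfile.mkstemp(suffix='.py')
with os.fdopen(fd, 'w') as f:
    f.write(source)
try:
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=60)
finally:
    os.unlink(path)

# Expect a nonzero exit and a RuntimeError traceback from the child.
print(proc.returncode != 0, 'RuntimeError' in proc.stderr)
```

The child re-executes the unguarded module during its bootstrap, hits the guard in spawn.py (or the already-set start-method check), and dies with a RuntimeError, which the parent propagates via `sys.exit(p.exitcode)`.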
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/mp_preload.py ADDED
@@ -0,0 +1,18 @@
import multiprocessing

multiprocessing.Lock()


def f():
    print("ok")


if __name__ == "__main__":
    ctx = multiprocessing.get_context("forkserver")
    modname = "multiprocess.tests.mp_preload"
    # Make sure it's importable
    __import__(modname)
    ctx.set_forkserver_preload([modname])
    proc = ctx.Process(target=f)
    proc.start()
    proc.join()
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_forkserver.py ADDED
@@ -0,0 +1,16 @@
import unittest
from multiprocess.tests import install_tests_in_module_dict

import sys
from test import support

if support.PGO:
    raise unittest.SkipTest("test is not helpful for PGO")

if sys.platform == "win32":
    raise unittest.SkipTest("forkserver is not available on Windows")

install_tests_in_module_dict(globals(), 'forkserver')

if __name__ == '__main__':
    unittest.main()
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_main_handling.py ADDED
@@ -0,0 +1,303 @@
# tests __main__ module handling in multiprocessing
from test import support
from test.support import import_helper
# Skip tests if _multiprocessing wasn't built.
import_helper.import_module('_multiprocessing')

import importlib
import importlib.machinery
import unittest
import sys
import os
import os.path
import py_compile

from test.support import os_helper
from test.support.script_helper import (
    make_pkg, make_script, make_zip_pkg, make_zip_script,
    assert_python_ok)

if support.PGO:
    raise unittest.SkipTest("test is not helpful for PGO")

# Look up which start methods are available to test
import multiprocess as multiprocessing
AVAILABLE_START_METHODS = set(multiprocessing.get_all_start_methods())

# Issue #22332: Skip tests if sem_open implementation is broken.
import_helper.import_module('multiprocess.synchronize')

verbose = support.verbose

test_source = """\
# multiprocessing includes all sorts of shenanigans to make __main__
# attributes accessible in the subprocess in a pickle compatible way.

# We run the "doesn't work in the interactive interpreter" example from
# the docs to make sure it *does* work from an executed __main__,
# regardless of the invocation mechanism

import sys
import time
sys.path.extend({0})
from multiprocess import Pool, set_start_method

# We use this __main__ defined function in the map call below in order to
# check that multiprocessing is correctly running the unguarded
# code in child processes and then making it available as __main__
def f(x):
    return x*x

# Check explicit relative imports
if "check_sibling" in __file__:
    # We're inside a package and not in a __main__.py file
    # so make sure explicit relative imports work correctly
    from . import sibling

if __name__ == '__main__':
    start_method = sys.argv[1]
    set_start_method(start_method)
    results = []
    with Pool(5) as pool:
        pool.map_async(f, [1, 2, 3], callback=results.extend)
        start_time = getattr(time,'monotonic',time.time)()
        while not results:
            time.sleep(0.05)
            # up to 1 min to report the results
            dt = getattr(time,'monotonic',time.time)() - start_time
            if dt > 60.0:
                raise RuntimeError("Timed out waiting for results (%.1f sec)" % dt)

    results.sort()
    print(start_method, "->", results)

    pool.join()
""".format(sys.path)

test_source_main_skipped_in_children = """\
# __main__.py files have an implied "if __name__ == '__main__'" so
# multiprocessing should always skip running them in child processes

# This means we can't use __main__ defined functions in child processes,
# so we just use "int" as a passthrough operation below

if __name__ != "__main__":
    raise RuntimeError("Should only be called as __main__!")

import sys
import time
sys.path.extend({0})
from multiprocess import Pool, set_start_method

start_method = sys.argv[1]
set_start_method(start_method)
results = []
with Pool(5) as pool:
    pool.map_async(int, [1, 4, 9], callback=results.extend)
    start_time = getattr(time,'monotonic',time.time)()
    while not results:
        time.sleep(0.05)
        # up to 1 min to report the results
        dt = getattr(time,'monotonic',time.time)() - start_time
        if dt > 60.0:
            raise RuntimeError("Timed out waiting for results (%.1f sec)" % dt)

results.sort()
print(start_method, "->", results)

pool.join()
""".format(sys.path)

# These helpers were copied from test_cmd_line_script & tweaked a bit...

def _make_test_script(script_dir, script_basename,
                      source=test_source, omit_suffix=False):
    to_return = make_script(script_dir, script_basename,
                            source, omit_suffix)
    # Hack to check explicit relative imports
    if script_basename == "check_sibling":
        make_script(script_dir, "sibling", "")
    importlib.invalidate_caches()
    return to_return

def _make_test_zip_pkg(zip_dir, zip_basename, pkg_name, script_basename,
                       source=test_source, depth=1):
    to_return = make_zip_pkg(zip_dir, zip_basename, pkg_name, script_basename,
                             source, depth)
    importlib.invalidate_caches()
    return to_return

# There's no easy way to pass the script directory in to get
# -m to work (avoiding that is the whole point of making
# directories and zipfiles executable!)
# So we fake it for testing purposes with a custom launch script
launch_source = """\
import sys, os.path, runpy
sys.path.insert(0, %s)
runpy._run_module_as_main(%r)
"""

def _make_launch_script(script_dir, script_basename, module_name, path=None):
    if path is None:
        path = "os.path.dirname(__file__)"
    else:
        path = repr(path)
    source = launch_source % (path, module_name)
    to_return = make_script(script_dir, script_basename, source)
    importlib.invalidate_caches()
    return to_return

class MultiProcessingCmdLineMixin():
    maxDiff = None # Show full tracebacks on subprocess failure

    def setUp(self):
        if self.start_method not in AVAILABLE_START_METHODS:
            self.skipTest("%r start method not available" % self.start_method)

    def _check_output(self, script_name, exit_code, out, err):
        if verbose > 1:
            print("Output from test script %r:" % script_name)
            print(repr(out))
        self.assertEqual(exit_code, 0)
        self.assertEqual(err.decode('utf-8'), '')
        expected_results = "%s -> [1, 4, 9]" % self.start_method
        self.assertEqual(out.decode('utf-8').strip(), expected_results)

    def _check_script(self, script_name, *cmd_line_switches):
        if not __debug__:
            cmd_line_switches += ('-' + 'O' * sys.flags.optimize,)
        run_args = cmd_line_switches + (script_name, self.start_method)
        rc, out, err = assert_python_ok(*run_args, __isolated=False)
        self._check_output(script_name, rc, out, err)

    def test_basic_script(self):
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, 'script')
            self._check_script(script_name)

    def test_basic_script_no_suffix(self):
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, 'script',
                                            omit_suffix=True)
            self._check_script(script_name)

    def test_ipython_workaround(self):
        # Some versions of the IPython launch script are missing the
        # __name__ = "__main__" guard, and multiprocessing has long had
        # a workaround for that case
        # See https://github.com/ipython/ipython/issues/4698
        source = test_source_main_skipped_in_children
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, 'ipython',
                                            source=source)
            self._check_script(script_name)
            script_no_suffix = _make_test_script(script_dir, 'ipython',
                                                 source=source,
                                                 omit_suffix=True)
            self._check_script(script_no_suffix)

    def test_script_compiled(self):
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, 'script')
            py_compile.compile(script_name, doraise=True)
            os.remove(script_name)
            pyc_file = import_helper.make_legacy_pyc(script_name)
            self._check_script(pyc_file)

    def test_directory(self):
        source = self.main_in_children_source
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, '__main__',
                                            source=source)
            self._check_script(script_dir)

    def test_directory_compiled(self):
        source = self.main_in_children_source
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, '__main__',
                                            source=source)
            py_compile.compile(script_name, doraise=True)
            os.remove(script_name)
            pyc_file = import_helper.make_legacy_pyc(script_name)
            self._check_script(script_dir)

    def test_zipfile(self):
        source = self.main_in_children_source
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, '__main__',
                                            source=source)
            zip_name, run_name = make_zip_script(script_dir, 'test_zip', script_name)
            self._check_script(zip_name)

    def test_zipfile_compiled(self):
        source = self.main_in_children_source
        with os_helper.temp_dir() as script_dir:
            script_name = _make_test_script(script_dir, '__main__',
                                            source=source)
            compiled_name = py_compile.compile(script_name, doraise=True)
            zip_name, run_name = make_zip_script(script_dir, 'test_zip', compiled_name)
            self._check_script(zip_name)

    def test_module_in_package(self):
        with os_helper.temp_dir() as script_dir:
            pkg_dir = os.path.join(script_dir, 'test_pkg')
            make_pkg(pkg_dir)
            script_name = _make_test_script(pkg_dir, 'check_sibling')
            launch_name = _make_launch_script(script_dir, 'launch',
                                              'test_pkg.check_sibling')
            self._check_script(launch_name)

    def test_module_in_package_in_zipfile(self):
        with os_helper.temp_dir() as script_dir:
            zip_name, run_name = _make_test_zip_pkg(script_dir, 'test_zip', 'test_pkg', 'script')
            launch_name = _make_launch_script(script_dir, 'launch', 'test_pkg.script', zip_name)
            self._check_script(launch_name)

    def test_module_in_subpackage_in_zipfile(self):
        with os_helper.temp_dir() as script_dir:
            zip_name, run_name = _make_test_zip_pkg(script_dir, 'test_zip', 'test_pkg', 'script', depth=2)
259
+ launch_name = _make_launch_script(script_dir, 'launch', 'test_pkg.test_pkg.script', zip_name)
260
+ self._check_script(launch_name)
261
+
262
+ def test_package(self):
263
+ source = self.main_in_children_source
264
+ with os_helper.temp_dir() as script_dir:
265
+ pkg_dir = os.path.join(script_dir, 'test_pkg')
266
+ make_pkg(pkg_dir)
267
+ script_name = _make_test_script(pkg_dir, '__main__',
268
+ source=source)
269
+ launch_name = _make_launch_script(script_dir, 'launch', 'test_pkg')
270
+ self._check_script(launch_name)
271
+
272
+ def test_package_compiled(self):
273
+ source = self.main_in_children_source
274
+ with os_helper.temp_dir() as script_dir:
275
+ pkg_dir = os.path.join(script_dir, 'test_pkg')
276
+ make_pkg(pkg_dir)
277
+ script_name = _make_test_script(pkg_dir, '__main__',
278
+ source=source)
279
+ compiled_name = py_compile.compile(script_name, doraise=True)
280
+ os.remove(script_name)
281
+ pyc_file = import_helper.make_legacy_pyc(script_name)
282
+ launch_name = _make_launch_script(script_dir, 'launch', 'test_pkg')
283
+ self._check_script(launch_name)
284
+
285
+ # Test all supported start methods (setupClass skips as appropriate)
286
+
287
+ class SpawnCmdLineTest(MultiProcessingCmdLineMixin, unittest.TestCase):
288
+ start_method = 'spawn'
289
+ main_in_children_source = test_source_main_skipped_in_children
290
+
291
+ class ForkCmdLineTest(MultiProcessingCmdLineMixin, unittest.TestCase):
292
+ start_method = 'fork'
293
+ main_in_children_source = test_source
294
+
295
+ class ForkServerCmdLineTest(MultiProcessingCmdLineMixin, unittest.TestCase):
296
+ start_method = 'forkserver'
297
+ main_in_children_source = test_source_main_skipped_in_children
298
+
299
+ def tearDownModule():
300
+ support.reap_children()
301
+
302
+ if __name__ == '__main__':
303
+ unittest.main()
env-llmeval/lib/python3.10/site-packages/multiprocess/tests/test_multiprocessing_spawn.py ADDED
@@ -0,0 +1,12 @@
+import unittest
+from multiprocess.tests import install_tests_in_module_dict
+
+from test import support
+
+if support.PGO:
+    raise unittest.SkipTest("test is not helpful for PGO")
+
+install_tests_in_module_dict(globals(), 'spawn')
+
+if __name__ == '__main__':
+    unittest.main()
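For context on what these test files exercise: under the `'spawn'` start method a child process re-imports the main module instead of inheriting state via `fork`, which is why the launch code must sit behind a `__name__ == "__main__"` guard (the IPython issue worked around above). A minimal sketch using the stdlib `multiprocessing` API, which `multiprocess` mirrors:

```python
import multiprocessing as mp

def square(x):
    # Must be defined at module top level so the spawned child,
    # which re-imports this module, can find and unpickle it.
    return x * x

def run_pool():
    # Explicitly request the "spawn" start method (the default on
    # Windows and macOS, and available everywhere).
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    # Without this guard, each spawned child re-importing the module
    # would try to launch more children recursively.
    print(run_pool())
```

The names `square` and `run_pool` are illustrative, not part of the test suite above; the guard requirement itself is exactly what `test_source_main_skipped_in_children` checks.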
env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (15.7 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/_compute_docstrings.cpython-310.pyc ADDED
Binary file (1.07 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/_generated_version.cpython-310.pyc ADDED
Binary file (278 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/acero.cpython-310.pyc ADDED
Binary file (6.77 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/benchmark.cpython-310.pyc ADDED
Binary file (237 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/cffi.cpython-310.pyc ADDED
Binary file (1.51 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/compute.cpython-310.pyc ADDED
Binary file (19.2 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/conftest.cpython-310.pyc ADDED
Binary file (5.35 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/csv.cpython-310.pyc ADDED
Binary file (432 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/cuda.cpython-310.pyc ADDED
Binary file (434 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/dataset.cpython-310.pyc ADDED
Binary file (32.9 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/feather.cpython-310.pyc ADDED
Binary file (8.19 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/filesystem.cpython-310.pyc ADDED
Binary file (14.1 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/flight.cpython-310.pyc ADDED
Binary file (1.58 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/fs.cpython-310.pyc ADDED
Binary file (11.6 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/hdfs.cpython-310.pyc ADDED
Binary file (7.03 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/ipc.cpython-310.pyc ADDED
Binary file (9.05 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/json.cpython-310.pyc ADDED
Binary file (260 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/jvm.cpython-310.pyc ADDED
Binary file (8.08 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/orc.cpython-310.pyc ADDED
Binary file (11.6 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/pandas_compat.cpython-310.pyc ADDED
Binary file (27.7 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/substrait.cpython-310.pyc ADDED
Binary file (520 Bytes).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/types.cpython-310.pyc ADDED
Binary file (8.07 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/__pycache__/util.cpython-310.pyc ADDED
Binary file (5.92 kB).

env-llmeval/lib/python3.10/site-packages/pyarrow/tests/__pycache__/test_scalars.cpython-310.pyc ADDED
Binary file (23.2 kB).