Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- llmeval-env/bin/Activate.ps1 +247 -0
- llmeval-env/bin/accelerate +8 -0
- llmeval-env/bin/accelerate-config +8 -0
- llmeval-env/bin/accelerate-estimate-memory +8 -0
- llmeval-env/bin/accelerate-launch +8 -0
- llmeval-env/bin/activate +69 -0
- llmeval-env/bin/activate.csh +26 -0
- llmeval-env/bin/activate.fish +69 -0
- llmeval-env/bin/chardetect +8 -0
- llmeval-env/bin/convert-caffe2-to-onnx +8 -0
- llmeval-env/bin/convert-onnx-to-caffe2 +8 -0
- llmeval-env/bin/datasets-cli +8 -0
- llmeval-env/bin/evaluate-cli +8 -0
- llmeval-env/bin/f2py +8 -0
- llmeval-env/bin/get_gprof +75 -0
- llmeval-env/bin/get_objgraph +54 -0
- llmeval-env/bin/huggingface-cli +8 -0
- llmeval-env/bin/isympy +8 -0
- llmeval-env/bin/lm-eval +8 -0
- llmeval-env/bin/lm_eval +8 -0
- llmeval-env/bin/nltk +8 -0
- llmeval-env/bin/normalizer +8 -0
- llmeval-env/bin/pip +8 -0
- llmeval-env/bin/pip3 +8 -0
- llmeval-env/bin/pip3.10 +8 -0
- llmeval-env/bin/pybind11-config +8 -0
- llmeval-env/bin/sacrebleu +8 -0
- llmeval-env/bin/tabulate +8 -0
- llmeval-env/bin/torchrun +8 -0
- llmeval-env/bin/tqdm +8 -0
- llmeval-env/bin/transformers-cli +8 -0
- llmeval-env/bin/undill +22 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/__init__.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/_common.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/_version.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/easter.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/relativedelta.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/rrule.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/tzwin.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/utils.cpython-310.pyc +0 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/_version.py +4 -0
- llmeval-env/lib/python3.10/site-packages/dateutil/rrule.py +1737 -0
- llmeval-env/lib/python3.10/site-packages/isympy.py +342 -0
- llmeval-env/lib/python3.10/site-packages/pip/py.typed +4 -0
- llmeval-env/lib/python3.10/site-packages/sqlitedict.py +697 -0
- llmeval-env/lib/python3.10/site-packages/threadpoolctl.py +1280 -0
- llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/INSTALLER +1 -0
- llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/METADATA +1590 -0
- llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/WHEEL +5 -0
- llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/entry_points.txt +2 -0
llmeval-env/bin/Activate.ps1
ADDED
@@ -0,0 +1,247 @@
<#
.Synopsis
Activate a Python virtual environment for the current PowerShell session.

.Description
Pushes the python executable for a virtual environment to the front of the
$Env:PATH environment variable and sets the prompt to signify that you are
in a Python virtual environment. Makes use of the command line switches as
well as the `pyvenv.cfg` file values present in the virtual environment.

.Parameter VenvDir
Path to the directory that contains the virtual environment to activate. The
default value for this is the parent of the directory that the Activate.ps1
script is located within.

.Parameter Prompt
The prompt prefix to display when this virtual environment is activated. By
default, this prompt is the name of the virtual environment folder (VenvDir)
surrounded by parentheses and followed by a single space (ie. '(.venv) ').

.Example
Activate.ps1
Activates the Python virtual environment that contains the Activate.ps1 script.

.Example
Activate.ps1 -Verbose
Activates the Python virtual environment that contains the Activate.ps1 script,
and shows extra information about the activation as it executes.

.Example
Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
Activates the Python virtual environment located in the specified location.

.Example
Activate.ps1 -Prompt "MyPython"
Activates the Python virtual environment that contains the Activate.ps1 script,
and prefixes the current prompt with the specified string (surrounded in
parentheses) while the virtual environment is active.

.Notes
On Windows, it may be required to enable this Activate.ps1 script by setting the
execution policy for the user. You can do this by issuing the following PowerShell
command:

PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

For more information on Execution Policies:
https://go.microsoft.com/fwlink/?LinkID=135170

#>
Param(
    [Parameter(Mandatory = $false)]
    [String]
    $VenvDir,
    [Parameter(Mandatory = $false)]
    [String]
    $Prompt
)

<# Function declarations --------------------------------------------------- #>

<#
.Synopsis
Remove all shell session elements added by the Activate script, including the
addition of the virtual environment's Python executable from the beginning of
the PATH variable.

.Parameter NonDestructive
If present, do not remove this function from the global namespace for the
session.

#>
function global:deactivate ([switch]$NonDestructive) {
    # Revert to original values

    # The prior prompt:
    if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) {
        Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt
        Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT
    }

    # The prior PYTHONHOME:
    if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) {
        Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME
        Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME
    }

    # The prior PATH:
    if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) {
        Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH
        Remove-Item -Path Env:_OLD_VIRTUAL_PATH
    }

    # Just remove the VIRTUAL_ENV altogether:
    if (Test-Path -Path Env:VIRTUAL_ENV) {
        Remove-Item -Path env:VIRTUAL_ENV
    }

    # Just remove VIRTUAL_ENV_PROMPT altogether.
    if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
        Remove-Item -Path env:VIRTUAL_ENV_PROMPT
    }

    # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
    if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
        Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
    }

    # Leave deactivate function in the global namespace if requested:
    if (-not $NonDestructive) {
        Remove-Item -Path function:deactivate
    }
}

<#
.Description
Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
given folder, and returns them in a map.

For each line in the pyvenv.cfg file, if that line can be parsed into exactly
two strings separated by `=` (with any amount of whitespace surrounding the =)
then it is considered a `key = value` line. The left hand string is the key,
the right hand is the value.

If the value starts with a `'` or a `"` then the first and last character is
stripped from the value before being captured.

.Parameter ConfigDir
Path to the directory that contains the `pyvenv.cfg` file.
#>
function Get-PyVenvConfig(
    [String]
    $ConfigDir
) {
    Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"

    # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
    $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue

    # An empty map will be returned if no config file is found.
    $pyvenvConfig = @{ }

    if ($pyvenvConfigPath) {

        Write-Verbose "File exists, parse `key = value` lines"
        $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath

        $pyvenvConfigContent | ForEach-Object {
            $keyval = $PSItem -split "\s*=\s*", 2
            if ($keyval[0] -and $keyval[1]) {
                $val = $keyval[1]

                # Remove extraneous quotations around a string value.
                if ("'""".Contains($val.Substring(0, 1))) {
                    $val = $val.Substring(1, $val.Length - 2)
                }

                $pyvenvConfig[$keyval[0]] = $val
                Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
            }
        }
    }
    return $pyvenvConfig
}


<# Begin Activate script --------------------------------------------------- #>

# Determine the containing directory of this script
$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
$VenvExecDir = Get-Item -Path $VenvExecPath

Write-Verbose "Activation script is located in path: '$VenvExecPath'"
Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"

# Set values required in priority: CmdLine, ConfigFile, Default
# First, get the location of the virtual environment, it might not be
# VenvExecDir if specified on the command line.
if ($VenvDir) {
    Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
}
else {
    Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
    $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
    Write-Verbose "VenvDir=$VenvDir"
}

# Next, read the `pyvenv.cfg` file to determine any required value such
# as `prompt`.
$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir

# Next, set the prompt from the command line, or the config file, or
# just use the name of the virtual environment folder.
if ($Prompt) {
    Write-Verbose "Prompt specified as argument, using '$Prompt'"
}
else {
    Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
    if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
        Write-Verbose "  Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
        $Prompt = $pyvenvCfg['prompt'];
    }
    else {
        Write-Verbose "  Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
        Write-Verbose "  Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
        $Prompt = Split-Path -Path $venvDir -Leaf
    }
}

Write-Verbose "Prompt = '$Prompt'"
Write-Verbose "VenvDir='$VenvDir'"

# Deactivate any currently active virtual environment, but leave the
# deactivate function in place.
deactivate -nondestructive

# Now set the environment variable VIRTUAL_ENV, used by many tools to determine
# that there is an activated venv.
$env:VIRTUAL_ENV = $VenvDir

if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {

    Write-Verbose "Setting prompt to '$Prompt'"

    # Set the prompt to include the env name
    # Make sure _OLD_VIRTUAL_PROMPT is global
    function global:_OLD_VIRTUAL_PROMPT { "" }
    Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
    New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt

    function global:prompt {
        Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
        _OLD_VIRTUAL_PROMPT
    }
    $env:VIRTUAL_ENV_PROMPT = $Prompt
}

# Clear PYTHONHOME
if (Test-Path -Path Env:PYTHONHOME) {
    Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
    Remove-Item -Path Env:PYTHONHOME
}

# Add the venv to the PATH
Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"
llmeval-env/bin/accelerate
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from accelerate.commands.accelerate_cli import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/accelerate-config
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from accelerate.commands.config import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/accelerate-estimate-memory
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from accelerate.commands.estimate import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/accelerate-launch
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from accelerate.commands.launch import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/activate
ADDED
@@ -0,0 +1,69 @@
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands.  Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
        hash -r 2> /dev/null
    fi

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    unset VIRTUAL_ENV_PROMPT
    if [ ! "${1:-}" = "nondestructive" ] ; then
        # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

VIRTUAL_ENV="/mnt/weka/peacock/llm_eval/llmeval-env"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    PS1="(llmeval-env) ${PS1:-}"
    export PS1
    VIRTUAL_ENV_PROMPT="(llmeval-env) "
    export VIRTUAL_ENV_PROMPT
fi

# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands.  Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
    hash -r 2> /dev/null
fi
llmeval-env/bin/activate.csh
ADDED
@@ -0,0 +1,26 @@
# This file must be used with "source bin/activate.csh" *from csh*.
# You cannot run it directly.
# Created by Davide Di Blasi <[email protected]>.
# Ported to Python 3.3 venv by Andrew Svetlov <[email protected]>

alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate'

# Unset irrelevant variables.
deactivate nondestructive

setenv VIRTUAL_ENV "/mnt/weka/peacock/llm_eval/llmeval-env"

set _OLD_VIRTUAL_PATH="$PATH"
setenv PATH "$VIRTUAL_ENV/bin:$PATH"


set _OLD_VIRTUAL_PROMPT="$prompt"

if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then
    set prompt = "(llmeval-env) $prompt"
    setenv VIRTUAL_ENV_PROMPT "(llmeval-env) "
endif

alias pydoc python -m pydoc

rehash
llmeval-env/bin/activate.fish
ADDED
@@ -0,0 +1,69 @@
# This file must be used with "source <venv>/bin/activate.fish" *from fish*
# (https://fishshell.com/); you cannot run it directly.

function deactivate -d "Exit virtual environment and return to normal shell environment"
    # reset old environment variables
    if test -n "$_OLD_VIRTUAL_PATH"
        set -gx PATH $_OLD_VIRTUAL_PATH
        set -e _OLD_VIRTUAL_PATH
    end
    if test -n "$_OLD_VIRTUAL_PYTHONHOME"
        set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME
        set -e _OLD_VIRTUAL_PYTHONHOME
    end

    if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
        set -e _OLD_FISH_PROMPT_OVERRIDE
        # prevents error when using nested fish instances (Issue #93858)
        if functions -q _old_fish_prompt
            functions -e fish_prompt
            functions -c _old_fish_prompt fish_prompt
            functions -e _old_fish_prompt
        end
    end

    set -e VIRTUAL_ENV
    set -e VIRTUAL_ENV_PROMPT
    if test "$argv[1]" != "nondestructive"
        # Self-destruct!
        functions -e deactivate
    end
end

# Unset irrelevant variables.
deactivate nondestructive

set -gx VIRTUAL_ENV "/mnt/weka/peacock/llm_eval/llmeval-env"

set -gx _OLD_VIRTUAL_PATH $PATH
set -gx PATH "$VIRTUAL_ENV/bin" $PATH

# Unset PYTHONHOME if set.
if set -q PYTHONHOME
    set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME
    set -e PYTHONHOME
end

if test -z "$VIRTUAL_ENV_DISABLE_PROMPT"
    # fish uses a function instead of an env var to generate the prompt.

    # Save the current fish_prompt function as the function _old_fish_prompt.
    functions -c fish_prompt _old_fish_prompt

    # With the original prompt function renamed, we can override with our own.
    function fish_prompt
        # Save the return status of the last command.
        set -l old_status $status

        # Output the venv prompt; color taken from the blue of the Python logo.
        printf "%s%s%s" (set_color 4B8BBE) "(llmeval-env) " (set_color normal)

        # Restore the return status of the previous command.
        echo "exit $old_status" | .
        # Output the original/"old" prompt.
        _old_fish_prompt
    end

    set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV"
    set -gx VIRTUAL_ENV_PROMPT "(llmeval-env) "
end
llmeval-env/bin/chardetect
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from chardet.cli.chardetect import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/convert-caffe2-to-onnx
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from caffe2.python.onnx.bin.conversion import caffe2_to_onnx
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(caffe2_to_onnx())
llmeval-env/bin/convert-onnx-to-caffe2
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from caffe2.python.onnx.bin.conversion import onnx_to_caffe2
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(onnx_to_caffe2())
llmeval-env/bin/datasets-cli
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from datasets.commands.datasets_cli import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/evaluate-cli
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from evaluate.commands.evaluate_cli import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/f2py
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from numpy.f2py.f2py2e import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/get_gprof
ADDED
@@ -0,0 +1,75 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
#
# Author: Mike McKerns (mmckerns @caltech and @uqfoundation)
# Copyright (c) 2008-2016 California Institute of Technology.
# Copyright (c) 2016-2024 The Uncertainty Quantification Foundation.
# License: 3-clause BSD.  The full license text is available at:
#  - https://github.com/uqfoundation/dill/blob/master/LICENSE
'''
build profile graph for the given instance

running:
  $ get_gprof <args> <instance>

executes:
  gprof2dot -f pstats <args> <type>.prof | dot -Tpng -o <type>.call.png

where:
  <args> are arguments for gprof2dot, such as "-n 5 -e 5"
  <instance> is code to create the instance to profile
  <type> is the class of the instance (i.e. type(instance))

For example:
  $ get_gprof -n 5 -e 1 "import numpy; numpy.array([1,2])"

will create 'ndarray.call.png' with the profile graph for numpy.array([1,2]),
where '-n 5' eliminates nodes below 5% threshold, similarly '-e 1' eliminates
edges below 1% threshold
'''

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print ("Please provide an object instance (e.g. 'import math; math.pi')")
        sys.exit()
    # grab args for gprof2dot
    args = sys.argv[1:-1]
    args = ' '.join(args)
    # last arg builds the object
    obj = sys.argv[-1]
    obj = obj.split(';')
    # multi-line prep for generating an instance
    for line in obj[:-1]:
        exec(line)
    # one-line generation of an instance
    try:
        obj = eval(obj[-1])
    except Exception:
        print ("Error processing object instance")
        sys.exit()

    # get object 'name'
    objtype = type(obj)
    name = getattr(objtype, '__name__', getattr(objtype, '__class__', objtype))

    # profile dumping an object
    import dill
    import os
    import cProfile
    #name = os.path.splitext(os.path.basename(__file__))[0]
    cProfile.run("dill.dumps(obj)", filename="%s.prof" % name)
    msg = "gprof2dot -f pstats %s %s.prof | dot -Tpng -o %s.call.png" % (args, name, name)
    try:
        res = os.system(msg)
    except Exception:
        print ("Please verify install of 'gprof2dot' to view profile graphs")
    if res:
        print ("Please verify install of 'gprof2dot' to view profile graphs")

    # get stats
    f_prof = "%s.prof" % name
    import pstats
    stats = pstats.Stats(f_prof, stream=sys.stdout)
    stats.strip_dirs().sort_stats('cumtime')
    stats.print_stats(20) #XXX: save to file instead of print top 20?
    os.remove(f_prof)
llmeval-env/bin/get_objgraph
ADDED
@@ -0,0 +1,54 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
#
# Author: Mike McKerns (mmckerns @caltech and @uqfoundation)
# Copyright (c) 2008-2016 California Institute of Technology.
# Copyright (c) 2016-2024 The Uncertainty Quantification Foundation.
# License: 3-clause BSD.  The full license text is available at:
#  - https://github.com/uqfoundation/dill/blob/master/LICENSE
"""
display the reference paths for objects in ``dill.types`` or a .pkl file

Notes:
    the generated image is useful in showing the pointer references in
    objects that are or can be pickled.  Any object in ``dill.objects``
    listed in ``dill.load_types(picklable=True, unpicklable=True)`` works.

Examples::

    $ get_objgraph ArrayType
    Image generated as ArrayType.png
"""

import dill as pickle
#pickle.debug.trace(True)
#import pickle

# get all objects for testing
from dill import load_types
load_types(pickleable=True,unpickleable=True)
from dill import objects

if __name__ == "__main__":
    import sys
    if len(sys.argv) != 2:
        print ("Please provide exactly one file or type name (e.g. 'IntType')")
        msg = "\n"
        for objtype in list(objects.keys())[:40]:
            msg += objtype + ', '
        print (msg + "...")
    else:
        objtype = str(sys.argv[-1])
        try:
            obj = objects[objtype]
        except KeyError:
            obj = pickle.load(open(objtype,'rb'))
            import os
            objtype = os.path.splitext(objtype)[0]
        try:
            import objgraph
            objgraph.show_refs(obj, filename=objtype+'.png')
        except ImportError:
            print ("Please install 'objgraph' to view object graphs")


# EOF
llmeval-env/bin/huggingface-cli
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from huggingface_hub.commands.huggingface_cli import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/isympy
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from isympy import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
llmeval-env/bin/lm-eval
ADDED
@@ -0,0 +1,8 @@
#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from lm_eval.__main__ import cli_evaluate
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(cli_evaluate())
llmeval-env/bin/lm_eval
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from lm_eval.__main__ import cli_evaluate
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(cli_evaluate())
llmeval-env/bin/nltk
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from nltk.cli import cli
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(cli())
llmeval-env/bin/normalizer
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from charset_normalizer.cli import cli_detect
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(cli_detect())
llmeval-env/bin/pip
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/pip3
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/pip3.10
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pip._internal.cli.main import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/pybind11-config
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from pybind11.__main__ import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/sacrebleu
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from sacrebleu.sacrebleu import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/tabulate
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from tabulate import _main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(_main())
llmeval-env/bin/torchrun
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from torch.distributed.run import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/tqdm
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from tqdm.cli import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/transformers-cli
ADDED
@@ -0,0 +1,8 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from transformers.commands.transformers_cli import main
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(main())
llmeval-env/bin/undill
ADDED
@@ -0,0 +1,22 @@
+#!/mnt/weka/peacock/llm_eval/llmeval-env/bin/python3
+#
+# Author: Mike McKerns (mmckerns @caltech and @uqfoundation)
+# Copyright (c) 2008-2016 California Institute of Technology.
+# Copyright (c) 2016-2024 The Uncertainty Quantification Foundation.
+# License: 3-clause BSD.  The full license text is available at:
+#  - https://github.com/uqfoundation/dill/blob/master/LICENSE
+"""
+unpickle the contents of a pickled object file
+
+Examples::
+
+    $ undill hello.pkl
+    ['hello', 'world']
+"""
+
+if __name__ == '__main__':
+    import sys
+    import dill
+    for file in sys.argv[1:]:
+        print (dill.load(open(file,'rb')))
+
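The `undill` script above simply unpickles each file argument and prints the result. The round-trip in its docstring can be sketched with the stdlib `pickle` module as a stand-in, since `dill` deliberately mirrors pickle's `dump`/`load` interface (the file name here is illustrative):

```python
import os
import pickle
import tempfile

# Write a pickled list, then read it back - the same round-trip the
# undill docstring demonstrates, using stdlib pickle as a stand-in for dill.
path = os.path.join(tempfile.mkdtemp(), 'hello.pkl')
with open(path, 'wb') as f:
    pickle.dump(['hello', 'world'], f)

with open(path, 'rb') as f:
    obj = pickle.load(f)

print(obj)  # ['hello', 'world']
```

`dill`'s advantage over `pickle` is the wider range of objects it can serialize (lambdas, closures, interpreter sessions), which is why these helper scripts ship with it.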
llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (948 Bytes).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/_common.cpython-310.pyc
ADDED
Binary file (1.43 kB).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/_version.cpython-310.pyc
ADDED
Binary file (282 Bytes).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/easter.cpython-310.pyc
ADDED
Binary file (2.21 kB).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/relativedelta.cpython-310.pyc
ADDED
Binary file (15.7 kB).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/rrule.cpython-310.pyc
ADDED
Binary file (43.3 kB).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/tzwin.cpython-310.pyc
ADDED
Binary file (201 Bytes).

llmeval-env/lib/python3.10/site-packages/dateutil/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (2.27 kB).
llmeval-env/lib/python3.10/site-packages/dateutil/_version.py
ADDED
@@ -0,0 +1,4 @@
+# file generated by setuptools_scm
+# don't change, don't track in version control
+__version__ = version = '2.9.0.post0'
+__version_tuple__ = version_tuple = (2, 9, 0)
llmeval-env/lib/python3.10/site-packages/dateutil/rrule.py
ADDED
@@ -0,0 +1,1737 @@
+# -*- coding: utf-8 -*-
+"""
+The rrule module offers a small, complete, and very fast, implementation of
+the recurrence rules documented in the
+`iCalendar RFC <https://tools.ietf.org/html/rfc5545>`_,
+including support for caching of results.
+"""
+import calendar
+import datetime
+import heapq
+import itertools
+import re
+import sys
+from functools import wraps
+# For warning about deprecation of until and count
+from warnings import warn
+
+from six import advance_iterator, integer_types
+
+from six.moves import _thread, range
+
+from ._common import weekday as weekdaybase
+
+try:
+    from math import gcd
+except ImportError:
+    from fractions import gcd
+
+__all__ = ["rrule", "rruleset", "rrulestr",
+           "YEARLY", "MONTHLY", "WEEKLY", "DAILY",
+           "HOURLY", "MINUTELY", "SECONDLY",
+           "MO", "TU", "WE", "TH", "FR", "SA", "SU"]
+
+# Every mask is 7 days longer to handle cross-year weekly periods.
+M366MASK = tuple([1]*31+[2]*29+[3]*31+[4]*30+[5]*31+[6]*30 +
+                 [7]*31+[8]*31+[9]*30+[10]*31+[11]*30+[12]*31+[1]*7)
+M365MASK = list(M366MASK)
+M29, M30, M31 = list(range(1, 30)), list(range(1, 31)), list(range(1, 32))
+MDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7])
+MDAY365MASK = list(MDAY366MASK)
+M29, M30, M31 = list(range(-29, 0)), list(range(-30, 0)), list(range(-31, 0))
+NMDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7])
+NMDAY365MASK = list(NMDAY366MASK)
+M366RANGE = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366)
+M365RANGE = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365)
+WDAYMASK = [0, 1, 2, 3, 4, 5, 6]*55
+del M29, M30, M31, M365MASK[59], MDAY365MASK[59], NMDAY365MASK[31]
+MDAY365MASK = tuple(MDAY365MASK)
+M365MASK = tuple(M365MASK)
+
+FREQNAMES = ['YEARLY', 'MONTHLY', 'WEEKLY', 'DAILY', 'HOURLY', 'MINUTELY', 'SECONDLY']
+
+(YEARLY,
+ MONTHLY,
+ WEEKLY,
+ DAILY,
+ HOURLY,
+ MINUTELY,
+ SECONDLY) = list(range(7))
+
+# Imported on demand.
+easter = None
+parser = None
+
+
+class weekday(weekdaybase):
+    """
+    This version of weekday does not allow n = 0.
+    """
+    def __init__(self, wkday, n=None):
+        if n == 0:
+            raise ValueError("Can't create weekday with n==0")
+
+        super(weekday, self).__init__(wkday, n)
+
+
+MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7))
+
+
+def _invalidates_cache(f):
+    """
+    Decorator for rruleset methods which may invalidate the
+    cached length.
+    """
+    @wraps(f)
+    def inner_func(self, *args, **kwargs):
+        rv = f(self, *args, **kwargs)
+        self._invalidate_cache()
+        return rv
+
+    return inner_func
+
+
+class rrulebase(object):
+    def __init__(self, cache=False):
+        if cache:
+            self._cache = []
+            self._cache_lock = _thread.allocate_lock()
+            self._invalidate_cache()
+        else:
+            self._cache = None
+            self._cache_complete = False
+        self._len = None
+
+    def __iter__(self):
+        if self._cache_complete:
+            return iter(self._cache)
+        elif self._cache is None:
+            return self._iter()
+        else:
+            return self._iter_cached()
+
+    def _invalidate_cache(self):
+        if self._cache is not None:
+            self._cache = []
+            self._cache_complete = False
+            self._cache_gen = self._iter()
+
+            if self._cache_lock.locked():
+                self._cache_lock.release()
+
+        self._len = None
+
+    def _iter_cached(self):
+        i = 0
+        gen = self._cache_gen
+        cache = self._cache
+        acquire = self._cache_lock.acquire
+        release = self._cache_lock.release
+        while gen:
+            if i == len(cache):
+                acquire()
+                if self._cache_complete:
+                    break
+                try:
+                    for j in range(10):
+                        cache.append(advance_iterator(gen))
+                except StopIteration:
+                    self._cache_gen = gen = None
+                    self._cache_complete = True
+                    break
+                release()
+            yield cache[i]
+            i += 1
+        while i < self._len:
+            yield cache[i]
+            i += 1
+
+    def __getitem__(self, item):
+        if self._cache_complete:
+            return self._cache[item]
+        elif isinstance(item, slice):
+            if item.step and item.step < 0:
+                return list(iter(self))[item]
+            else:
+                return list(itertools.islice(self,
+                                             item.start or 0,
+                                             item.stop or sys.maxsize,
+                                             item.step or 1))
+        elif item >= 0:
+            gen = iter(self)
+            try:
+                for i in range(item+1):
+                    res = advance_iterator(gen)
+            except StopIteration:
+                raise IndexError
+            return res
+        else:
+            return list(iter(self))[item]
+
+    def __contains__(self, item):
+        if self._cache_complete:
+            return item in self._cache
+        else:
+            for i in self:
+                if i == item:
+                    return True
+                elif i > item:
+                    return False
+        return False
+
+    # __len__() introduces a large performance penalty.
+    def count(self):
+        """ Returns the number of recurrences in this set. It will have go
+        through the whole recurrence, if this hasn't been done before. """
+        if self._len is None:
+            for x in self:
+                pass
+        return self._len
+
+    def before(self, dt, inc=False):
+        """ Returns the last recurrence before the given datetime instance. The
+        inc keyword defines what happens if dt is an occurrence. With
+        inc=True, if dt itself is an occurrence, it will be returned. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        last = None
+        if inc:
+            for i in gen:
+                if i > dt:
+                    break
+                last = i
+        else:
+            for i in gen:
+                if i >= dt:
+                    break
+                last = i
+        return last
+
+    def after(self, dt, inc=False):
+        """ Returns the first recurrence after the given datetime instance. The
+        inc keyword defines what happens if dt is an occurrence. With
+        inc=True, if dt itself is an occurrence, it will be returned. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        if inc:
+            for i in gen:
+                if i >= dt:
+                    return i
+        else:
+            for i in gen:
+                if i > dt:
+                    return i
+        return None
+
+    def xafter(self, dt, count=None, inc=False):
+        """
+        Generator which yields up to `count` recurrences after the given
+        datetime instance, equivalent to `after`.
+
+        :param dt:
+            The datetime at which to start generating recurrences.
+
+        :param count:
+            The maximum number of recurrences to generate. If `None` (default),
+            dates are generated until the recurrence rule is exhausted.
+
+        :param inc:
+            If `dt` is an instance of the rule and `inc` is `True`, it is
+            included in the output.
+
+        :yields: Yields a sequence of `datetime` objects.
+        """
+
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+
+        # Select the comparison function
+        if inc:
+            comp = lambda dc, dtc: dc >= dtc
+        else:
+            comp = lambda dc, dtc: dc > dtc
+
+        # Generate dates
+        n = 0
+        for d in gen:
+            if comp(d, dt):
+                if count is not None:
+                    n += 1
+                    if n > count:
+                        break
+
+                yield d
+
+    def between(self, after, before, inc=False, count=1):
+        """ Returns all the occurrences of the rrule between after and before.
+        The inc keyword defines what happens if after and/or before are
+        themselves occurrences. With inc=True, they will be included in the
+        list, if they are found in the recurrence set. """
+        if self._cache_complete:
+            gen = self._cache
+        else:
+            gen = self
+        started = False
+        l = []
+        if inc:
+            for i in gen:
+                if i > before:
+                    break
+                elif not started:
+                    if i >= after:
+                        started = True
+                        l.append(i)
+                else:
+                    l.append(i)
+        else:
+            for i in gen:
+                if i >= before:
+                    break
+                elif not started:
+                    if i > after:
+                        started = True
+                        l.append(i)
+                else:
+                    l.append(i)
+        return l
+
+
+class rrule(rrulebase):
+    """
+    That's the base of the rrule operation. It accepts all the keywords
+    defined in the RFC as its constructor parameters (except byday,
+    which was renamed to byweekday) and more. The constructor prototype is::
+
+            rrule(freq)
+
+    Where freq must be one of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY,
+    or SECONDLY.
+
+    .. note::
+        Per RFC section 3.3.10, recurrence instances falling on invalid dates
+        and times are ignored rather than coerced:
+
+            Recurrence rules may generate recurrence instances with an invalid
+            date (e.g., February 30) or nonexistent local time (e.g., 1:30 AM
+            on a day where the local time is moved forward by an hour at 1:00
+            AM).  Such recurrence instances MUST be ignored and MUST NOT be
+            counted as part of the recurrence set.
+
+        This can lead to possibly surprising behavior when, for example, the
+        start date occurs at the end of the month:
+
+        >>> from dateutil.rrule import rrule, MONTHLY
+        >>> from datetime import datetime
+        >>> start_date = datetime(2014, 12, 31)
+        >>> list(rrule(freq=MONTHLY, count=4, dtstart=start_date))
+        ... # doctest: +NORMALIZE_WHITESPACE
+        [datetime.datetime(2014, 12, 31, 0, 0),
+         datetime.datetime(2015, 1, 31, 0, 0),
+         datetime.datetime(2015, 3, 31, 0, 0),
+         datetime.datetime(2015, 5, 31, 0, 0)]
+
+    Additionally, it supports the following keyword arguments:
+
+    :param dtstart:
+        The recurrence start. Besides being the base for the recurrence,
+        missing parameters in the final recurrence instances will also be
+        extracted from this date. If not given, datetime.now() will be used
+        instead.
+    :param interval:
+        The interval between each freq iteration. For example, when using
+        YEARLY, an interval of 2 means once every two years, but with HOURLY,
+        it means once every two hours. The default interval is 1.
+    :param wkst:
+        The week start day. Must be one of the MO, TU, WE constants, or an
+        integer, specifying the first day of the week. This will affect
+        recurrences based on weekly periods. The default week start is got
+        from calendar.firstweekday(), and may be modified by
+        calendar.setfirstweekday().
+    :param count:
+        If given, this determines how many occurrences will be generated.
+
+        .. note::
+            As of version 2.5.0, the use of the keyword ``until`` in conjunction
+            with ``count`` is deprecated, to make sure ``dateutil`` is fully
+            compliant with `RFC-5545 Sec. 3.3.10 <https://tools.ietf.org/
+            html/rfc5545#section-3.3.10>`_. Therefore, ``until`` and ``count``
+            **must not** occur in the same call to ``rrule``.
+    :param until:
+        If given, this must be a datetime instance specifying the upper-bound
+        limit of the recurrence. The last recurrence in the rule is the greatest
+        datetime that is less than or equal to the value specified in the
+        ``until`` parameter.
+
+        .. note::
+            As of version 2.5.0, the use of the keyword ``until`` in conjunction
+            with ``count`` is deprecated, to make sure ``dateutil`` is fully
+            compliant with `RFC-5545 Sec. 3.3.10 <https://tools.ietf.org/
+            html/rfc5545#section-3.3.10>`_. Therefore, ``until`` and ``count``
+            **must not** occur in the same call to ``rrule``.
+    :param bysetpos:
+        If given, it must be either an integer, or a sequence of integers,
+        positive or negative. Each given integer will specify an occurrence
+        number, corresponding to the nth occurrence of the rule inside the
+        frequency period. For example, a bysetpos of -1 if combined with a
+        MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will
+        result in the last work day of every month.
+    :param bymonth:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the months to apply the recurrence to.
+    :param bymonthday:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the month days to apply the recurrence to.
+    :param byyearday:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the year days to apply the recurrence to.
+    :param byeaster:
+        If given, it must be either an integer, or a sequence of integers,
+        positive or negative. Each integer will define an offset from the
+        Easter Sunday. Passing the offset 0 to byeaster will yield the Easter
+        Sunday itself. This is an extension to the RFC specification.
+    :param byweekno:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the week numbers to apply the recurrence to. Week numbers
+        have the meaning described in ISO8601, that is, the first week of
+        the year is that containing at least four days of the new year.
+    :param byweekday:
+        If given, it must be either an integer (0 == MO), a sequence of
+        integers, one of the weekday constants (MO, TU, etc), or a sequence
+        of these constants. When given, these variables will define the
+        weekdays where the recurrence will be applied. It's also possible to
+        use an argument n for the weekday instances, which will mean the nth
+        occurrence of this weekday in the period. For example, with MONTHLY,
+        or with YEARLY and BYMONTH, using FR(+1) in byweekday will specify the
+        first friday of the month where the recurrence happens. Notice that in
+        the RFC documentation, this is specified as BYDAY, but was renamed to
+        avoid the ambiguity of that keyword.
+    :param byhour:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the hours to apply the recurrence to.
+    :param byminute:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the minutes to apply the recurrence to.
+    :param bysecond:
+        If given, it must be either an integer, or a sequence of integers,
+        meaning the seconds to apply the recurrence to.
+    :param cache:
+        If given, it must be a boolean value specifying to enable or disable
+        caching of results. If you will use the same rrule instance multiple
+        times, enabling caching will improve the performance considerably.
+    """
+    def __init__(self, freq, dtstart=None,
+                 interval=1, wkst=None, count=None, until=None, bysetpos=None,
+                 bymonth=None, bymonthday=None, byyearday=None, byeaster=None,
+                 byweekno=None, byweekday=None,
+                 byhour=None, byminute=None, bysecond=None,
+                 cache=False):
+        super(rrule, self).__init__(cache)
+        global easter
+        if not dtstart:
+            if until and until.tzinfo:
+                dtstart = datetime.datetime.now(tz=until.tzinfo).replace(microsecond=0)
+            else:
+                dtstart = datetime.datetime.now().replace(microsecond=0)
+        elif not isinstance(dtstart, datetime.datetime):
+            dtstart = datetime.datetime.fromordinal(dtstart.toordinal())
+        else:
+            dtstart = dtstart.replace(microsecond=0)
+        self._dtstart = dtstart
+        self._tzinfo = dtstart.tzinfo
+        self._freq = freq
+        self._interval = interval
+        self._count = count
+
+        # Cache the original byxxx rules, if they are provided, as the _byxxx
+        # attributes do not necessarily map to the inputs, and this can be
+        # a problem in generating the strings. Only store things if they've
+        # been supplied (the string retrieval will just use .get())
+        self._original_rule = {}
+
+        if until and not isinstance(until, datetime.datetime):
+            until = datetime.datetime.fromordinal(until.toordinal())
+        self._until = until
+
+        if self._dtstart and self._until:
+            if (self._dtstart.tzinfo is not None) != (self._until.tzinfo is not None):
+                # According to RFC5545 Section 3.3.10:
+                # https://tools.ietf.org/html/rfc5545#section-3.3.10
+                #
+                # > If the "DTSTART" property is specified as a date with UTC
+                # > time or a date with local time and time zone reference,
+                # > then the UNTIL rule part MUST be specified as a date with
+                # > UTC time.
+                raise ValueError(
+                    'RRULE UNTIL values must be specified in UTC when DTSTART '
+                    'is timezone-aware'
+                )
+
+        if count is not None and until:
+            warn("Using both 'count' and 'until' is inconsistent with RFC 5545"
+                 " and has been deprecated in dateutil. Future versions will "
+                 "raise an error.", DeprecationWarning)
+
+        if wkst is None:
+            self._wkst = calendar.firstweekday()
+        elif isinstance(wkst, integer_types):
+            self._wkst = wkst
+        else:
+            self._wkst = wkst.weekday
+
+        if bysetpos is None:
+            self._bysetpos = None
+        elif isinstance(bysetpos, integer_types):
+            if bysetpos == 0 or not (-366 <= bysetpos <= 366):
+                raise ValueError("bysetpos must be between 1 and 366, "
+                                 "or between -366 and -1")
+            self._bysetpos = (bysetpos,)
+        else:
+            self._bysetpos = tuple(bysetpos)
+            for pos in self._bysetpos:
+                if pos == 0 or not (-366 <= pos <= 366):
+                    raise ValueError("bysetpos must be between 1 and 366, "
+                                     "or between -366 and -1")
+
+        if self._bysetpos:
+            self._original_rule['bysetpos'] = self._bysetpos
+
+        if (byweekno is None and byyearday is None and bymonthday is None and
+                byweekday is None and byeaster is None):
+            if freq == YEARLY:
+                if bymonth is None:
|
508 |
+
bymonth = dtstart.month
|
509 |
+
self._original_rule['bymonth'] = None
|
510 |
+
bymonthday = dtstart.day
|
511 |
+
self._original_rule['bymonthday'] = None
|
512 |
+
elif freq == MONTHLY:
|
513 |
+
bymonthday = dtstart.day
|
514 |
+
self._original_rule['bymonthday'] = None
|
515 |
+
elif freq == WEEKLY:
|
516 |
+
byweekday = dtstart.weekday()
|
517 |
+
self._original_rule['byweekday'] = None
|
518 |
+
|
519 |
+
# bymonth
|
520 |
+
if bymonth is None:
|
521 |
+
self._bymonth = None
|
522 |
+
else:
|
523 |
+
if isinstance(bymonth, integer_types):
|
524 |
+
bymonth = (bymonth,)
|
525 |
+
|
526 |
+
self._bymonth = tuple(sorted(set(bymonth)))
|
527 |
+
|
528 |
+
if 'bymonth' not in self._original_rule:
|
529 |
+
self._original_rule['bymonth'] = self._bymonth
|
530 |
+
|
531 |
+
# byyearday
|
532 |
+
if byyearday is None:
|
533 |
+
self._byyearday = None
|
534 |
+
else:
|
535 |
+
if isinstance(byyearday, integer_types):
|
536 |
+
byyearday = (byyearday,)
|
537 |
+
|
538 |
+
self._byyearday = tuple(sorted(set(byyearday)))
|
539 |
+
self._original_rule['byyearday'] = self._byyearday
|
540 |
+
|
541 |
+
# byeaster
|
542 |
+
if byeaster is not None:
|
543 |
+
if not easter:
|
544 |
+
from dateutil import easter
|
545 |
+
if isinstance(byeaster, integer_types):
|
546 |
+
self._byeaster = (byeaster,)
|
547 |
+
else:
|
548 |
+
self._byeaster = tuple(sorted(byeaster))
|
549 |
+
|
550 |
+
self._original_rule['byeaster'] = self._byeaster
|
551 |
+
else:
|
552 |
+
self._byeaster = None
|
553 |
+
|
554 |
+
# bymonthday
|
555 |
+
if bymonthday is None:
|
556 |
+
self._bymonthday = ()
|
557 |
+
self._bynmonthday = ()
|
558 |
+
else:
|
559 |
+
if isinstance(bymonthday, integer_types):
|
560 |
+
bymonthday = (bymonthday,)
|
561 |
+
|
562 |
+
bymonthday = set(bymonthday) # Ensure it's unique
|
563 |
+
|
564 |
+
self._bymonthday = tuple(sorted(x for x in bymonthday if x > 0))
|
565 |
+
self._bynmonthday = tuple(sorted(x for x in bymonthday if x < 0))
|
566 |
+
|
567 |
+
# Storing positive numbers first, then negative numbers
|
568 |
+
if 'bymonthday' not in self._original_rule:
|
569 |
+
self._original_rule['bymonthday'] = tuple(
|
570 |
+
itertools.chain(self._bymonthday, self._bynmonthday))
|
571 |
+
|
572 |
+
# byweekno
|
573 |
+
if byweekno is None:
|
574 |
+
self._byweekno = None
|
575 |
+
else:
|
576 |
+
if isinstance(byweekno, integer_types):
|
577 |
+
byweekno = (byweekno,)
|
578 |
+
|
579 |
+
self._byweekno = tuple(sorted(set(byweekno)))
|
580 |
+
|
581 |
+
self._original_rule['byweekno'] = self._byweekno
|
582 |
+
|
583 |
+
# byweekday / bynweekday
|
584 |
+
if byweekday is None:
|
585 |
+
self._byweekday = None
|
586 |
+
self._bynweekday = None
|
587 |
+
else:
|
588 |
+
# If it's one of the valid non-sequence types, convert to a
|
589 |
+
# single-element sequence before the iterator that builds the
|
590 |
+
# byweekday set.
|
591 |
+
if isinstance(byweekday, integer_types) or hasattr(byweekday, "n"):
|
592 |
+
byweekday = (byweekday,)
|
593 |
+
|
594 |
+
self._byweekday = set()
|
595 |
+
self._bynweekday = set()
|
596 |
+
for wday in byweekday:
|
597 |
+
if isinstance(wday, integer_types):
|
598 |
+
self._byweekday.add(wday)
|
599 |
+
elif not wday.n or freq > MONTHLY:
|
600 |
+
self._byweekday.add(wday.weekday)
|
601 |
+
else:
|
602 |
+
self._bynweekday.add((wday.weekday, wday.n))
|
603 |
+
|
604 |
+
if not self._byweekday:
|
605 |
+
self._byweekday = None
|
606 |
+
elif not self._bynweekday:
|
607 |
+
self._bynweekday = None
|
608 |
+
|
609 |
+
if self._byweekday is not None:
|
610 |
+
self._byweekday = tuple(sorted(self._byweekday))
|
611 |
+
orig_byweekday = [weekday(x) for x in self._byweekday]
|
612 |
+
else:
|
613 |
+
orig_byweekday = ()
|
614 |
+
|
615 |
+
if self._bynweekday is not None:
|
616 |
+
self._bynweekday = tuple(sorted(self._bynweekday))
|
617 |
+
orig_bynweekday = [weekday(*x) for x in self._bynweekday]
|
618 |
+
else:
|
619 |
+
orig_bynweekday = ()
|
620 |
+
|
621 |
+
if 'byweekday' not in self._original_rule:
|
622 |
+
self._original_rule['byweekday'] = tuple(itertools.chain(
|
623 |
+
orig_byweekday, orig_bynweekday))
|
624 |
+
|
625 |
+
# byhour
|
626 |
+
if byhour is None:
|
627 |
+
if freq < HOURLY:
|
628 |
+
self._byhour = {dtstart.hour}
|
629 |
+
else:
|
630 |
+
self._byhour = None
|
631 |
+
else:
|
632 |
+
if isinstance(byhour, integer_types):
|
633 |
+
byhour = (byhour,)
|
634 |
+
|
635 |
+
if freq == HOURLY:
|
636 |
+
self._byhour = self.__construct_byset(start=dtstart.hour,
|
637 |
+
byxxx=byhour,
|
638 |
+
base=24)
|
639 |
+
else:
|
640 |
+
self._byhour = set(byhour)
|
641 |
+
|
642 |
+
self._byhour = tuple(sorted(self._byhour))
|
643 |
+
self._original_rule['byhour'] = self._byhour
|
644 |
+
|
645 |
+
# byminute
|
646 |
+
if byminute is None:
|
647 |
+
if freq < MINUTELY:
|
648 |
+
self._byminute = {dtstart.minute}
|
649 |
+
else:
|
650 |
+
self._byminute = None
|
651 |
+
else:
|
652 |
+
if isinstance(byminute, integer_types):
|
653 |
+
byminute = (byminute,)
|
654 |
+
|
655 |
+
if freq == MINUTELY:
|
656 |
+
self._byminute = self.__construct_byset(start=dtstart.minute,
|
657 |
+
byxxx=byminute,
|
658 |
+
base=60)
|
659 |
+
else:
|
660 |
+
self._byminute = set(byminute)
|
661 |
+
|
662 |
+
self._byminute = tuple(sorted(self._byminute))
|
663 |
+
self._original_rule['byminute'] = self._byminute
|
664 |
+
|
665 |
+
# bysecond
|
666 |
+
if bysecond is None:
|
667 |
+
if freq < SECONDLY:
|
668 |
+
self._bysecond = ((dtstart.second,))
|
669 |
+
else:
|
670 |
+
self._bysecond = None
|
671 |
+
else:
|
672 |
+
if isinstance(bysecond, integer_types):
|
673 |
+
bysecond = (bysecond,)
|
674 |
+
|
675 |
+
self._bysecond = set(bysecond)
|
676 |
+
|
677 |
+
if freq == SECONDLY:
|
678 |
+
self._bysecond = self.__construct_byset(start=dtstart.second,
|
679 |
+
byxxx=bysecond,
|
680 |
+
base=60)
|
681 |
+
else:
|
682 |
+
self._bysecond = set(bysecond)
|
683 |
+
|
684 |
+
self._bysecond = tuple(sorted(self._bysecond))
|
685 |
+
self._original_rule['bysecond'] = self._bysecond
|
686 |
+
|
687 |
+
if self._freq >= HOURLY:
|
688 |
+
self._timeset = None
|
689 |
+
else:
|
690 |
+
self._timeset = []
|
691 |
+
for hour in self._byhour:
|
692 |
+
for minute in self._byminute:
|
693 |
+
for second in self._bysecond:
|
694 |
+
self._timeset.append(
|
695 |
+
datetime.time(hour, minute, second,
|
696 |
+
tzinfo=self._tzinfo))
|
697 |
+
self._timeset.sort()
|
698 |
+
self._timeset = tuple(self._timeset)
|
699 |
+
|
700 |
+
    def __str__(self):
        """
        Output a string that would generate this RRULE if passed to rrulestr.
        This is mostly compatible with RFC5545, except for the
        dateutil-specific extension BYEASTER.
        """

        output = []
        h, m, s = [None] * 3
        if self._dtstart:
            output.append(self._dtstart.strftime('DTSTART:%Y%m%dT%H%M%S'))
            h, m, s = self._dtstart.timetuple()[3:6]

        parts = ['FREQ=' + FREQNAMES[self._freq]]
        if self._interval != 1:
            parts.append('INTERVAL=' + str(self._interval))

        if self._wkst:
            parts.append('WKST=' + repr(weekday(self._wkst))[0:2])

        if self._count is not None:
            parts.append('COUNT=' + str(self._count))

        if self._until:
            parts.append(self._until.strftime('UNTIL=%Y%m%dT%H%M%S'))

        if self._original_rule.get('byweekday') is not None:
            # The str() method on weekday objects doesn't generate
            # RFC5545-compliant strings, so we should modify that.
            original_rule = dict(self._original_rule)
            wday_strings = []
            for wday in original_rule['byweekday']:
                if wday.n:
                    wday_strings.append('{n:+d}{wday}'.format(
                        n=wday.n,
                        wday=repr(wday)[0:2]))
                else:
                    wday_strings.append(repr(wday))

            original_rule['byweekday'] = wday_strings
        else:
            original_rule = self._original_rule

        partfmt = '{name}={vals}'
        for name, key in [('BYSETPOS', 'bysetpos'),
                          ('BYMONTH', 'bymonth'),
                          ('BYMONTHDAY', 'bymonthday'),
                          ('BYYEARDAY', 'byyearday'),
                          ('BYWEEKNO', 'byweekno'),
                          ('BYDAY', 'byweekday'),
                          ('BYHOUR', 'byhour'),
                          ('BYMINUTE', 'byminute'),
                          ('BYSECOND', 'bysecond'),
                          ('BYEASTER', 'byeaster')]:
            value = original_rule.get(key)
            if value:
                parts.append(partfmt.format(name=name, vals=(','.join(str(v)
                                                             for v in value))))

        output.append('RRULE:' + ';'.join(parts))
        return '\n'.join(output)

    def replace(self, **kwargs):
        """Return new rrule with same attributes except for those attributes
           given new values by whichever keyword arguments are specified."""
        new_kwargs = {"interval": self._interval,
                      "count": self._count,
                      "dtstart": self._dtstart,
                      "freq": self._freq,
                      "until": self._until,
                      "wkst": self._wkst,
                      "cache": False if self._cache is None else True}
        new_kwargs.update(self._original_rule)
        new_kwargs.update(kwargs)
        return rrule(**new_kwargs)

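# NOTE (editorial sketch, not part of dateutil): `replace` above builds its
# constructor arguments in three layers -- base attributes, then the stored
# original by* rules, then the caller's overrides -- and later layers win.
# The merge order can be illustrated standalone (names are hypothetical):

```python
def replace_kwargs(current, original_rule, overrides):
    # Later dict.update() calls win, so caller overrides take precedence
    # over stored by* rules, which take precedence over base attributes.
    new_kwargs = dict(current)
    new_kwargs.update(original_rule)
    new_kwargs.update(overrides)
    return new_kwargs

print(replace_kwargs({"freq": 3, "interval": 1},
                     {"byhour": (9,)},
                     {"interval": 2}))
# {'freq': 3, 'interval': 2, 'byhour': (9,)}
```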
    def _iter(self):
        year, month, day, hour, minute, second, weekday, yearday, _ = \
            self._dtstart.timetuple()

        # Some local variables to speed things up a bit
        freq = self._freq
        interval = self._interval
        wkst = self._wkst
        until = self._until
        bymonth = self._bymonth
        byweekno = self._byweekno
        byyearday = self._byyearday
        byweekday = self._byweekday
        byeaster = self._byeaster
        bymonthday = self._bymonthday
        bynmonthday = self._bynmonthday
        bysetpos = self._bysetpos
        byhour = self._byhour
        byminute = self._byminute
        bysecond = self._bysecond

        ii = _iterinfo(self)
        ii.rebuild(year, month)

        getdayset = {YEARLY: ii.ydayset,
                     MONTHLY: ii.mdayset,
                     WEEKLY: ii.wdayset,
                     DAILY: ii.ddayset,
                     HOURLY: ii.ddayset,
                     MINUTELY: ii.ddayset,
                     SECONDLY: ii.ddayset}[freq]

        if freq < HOURLY:
            timeset = self._timeset
        else:
            gettimeset = {HOURLY: ii.htimeset,
                          MINUTELY: ii.mtimeset,
                          SECONDLY: ii.stimeset}[freq]
            if ((freq >= HOURLY and
                 self._byhour and hour not in self._byhour) or
                    (freq >= MINUTELY and
                     self._byminute and minute not in self._byminute) or
                    (freq >= SECONDLY and
                     self._bysecond and second not in self._bysecond)):
                timeset = ()
            else:
                timeset = gettimeset(hour, minute, second)

        total = 0
        count = self._count
        while True:
            # Get dayset with the right frequency
            dayset, start, end = getdayset(year, month, day)

            # Do the "hard" work ;-)
            filtered = False
            for i in dayset[start:end]:
                if ((bymonth and ii.mmask[i] not in bymonth) or
                        (byweekno and not ii.wnomask[i]) or
                        (byweekday and ii.wdaymask[i] not in byweekday) or
                        (ii.nwdaymask and not ii.nwdaymask[i]) or
                        (byeaster and not ii.eastermask[i]) or
                        ((bymonthday or bynmonthday) and
                         ii.mdaymask[i] not in bymonthday and
                         ii.nmdaymask[i] not in bynmonthday) or
                        (byyearday and
                         ((i < ii.yearlen and i+1 not in byyearday and
                           -ii.yearlen+i not in byyearday) or
                          (i >= ii.yearlen and i+1-ii.yearlen not in byyearday and
                           -ii.nextyearlen+i-ii.yearlen not in byyearday)))):
                    dayset[i] = None
                    filtered = True

            # Output results
            if bysetpos and timeset:
                poslist = []
                for pos in bysetpos:
                    if pos < 0:
                        daypos, timepos = divmod(pos, len(timeset))
                    else:
                        daypos, timepos = divmod(pos-1, len(timeset))
                    try:
                        i = [x for x in dayset[start:end]
                             if x is not None][daypos]
                        time = timeset[timepos]
                    except IndexError:
                        pass
                    else:
                        date = datetime.date.fromordinal(ii.yearordinal+i)
                        res = datetime.datetime.combine(date, time)
                        if res not in poslist:
                            poslist.append(res)
                poslist.sort()
                for res in poslist:
                    if until and res > until:
                        self._len = total
                        return
                    elif res >= self._dtstart:
                        if count is not None:
                            count -= 1
                            if count < 0:
                                self._len = total
                                return
                        total += 1
                        yield res
            else:
                for i in dayset[start:end]:
                    if i is not None:
                        date = datetime.date.fromordinal(ii.yearordinal + i)
                        for time in timeset:
                            res = datetime.datetime.combine(date, time)
                            if until and res > until:
                                self._len = total
                                return
                            elif res >= self._dtstart:
                                if count is not None:
                                    count -= 1
                                    if count < 0:
                                        self._len = total
                                        return

                                total += 1
                                yield res

            # Handle frequency and interval
            fixday = False
            if freq == YEARLY:
                year += interval
                if year > datetime.MAXYEAR:
                    self._len = total
                    return
                ii.rebuild(year, month)
            elif freq == MONTHLY:
                month += interval
                if month > 12:
                    div, mod = divmod(month, 12)
                    month = mod
                    year += div
                    if month == 0:
                        month = 12
                        year -= 1
                    if year > datetime.MAXYEAR:
                        self._len = total
                        return
                ii.rebuild(year, month)
            elif freq == WEEKLY:
                if wkst > weekday:
                    day += -(weekday+1+(6-wkst))+self._interval*7
                else:
                    day += -(weekday-wkst)+self._interval*7
                weekday = wkst
                fixday = True
            elif freq == DAILY:
                day += interval
                fixday = True
            elif freq == HOURLY:
                if filtered:
                    # Jump to one iteration before next day
                    hour += ((23-hour)//interval)*interval

                if byhour:
                    ndays, hour = self.__mod_distance(value=hour,
                                                      byxxx=self._byhour,
                                                      base=24)
                else:
                    ndays, hour = divmod(hour+interval, 24)

                if ndays:
                    day += ndays
                    fixday = True

                timeset = gettimeset(hour, minute, second)
            elif freq == MINUTELY:
                if filtered:
                    # Jump to one iteration before next day
                    minute += ((1439-(hour*60+minute))//interval)*interval

                valid = False
                rep_rate = (24*60)
                for j in range(rep_rate // gcd(interval, rep_rate)):
                    if byminute:
                        nhours, minute = \
                            self.__mod_distance(value=minute,
                                                byxxx=self._byminute,
                                                base=60)
                    else:
                        nhours, minute = divmod(minute+interval, 60)

                    div, hour = divmod(hour+nhours, 24)
                    if div:
                        day += div
                        fixday = True
                        filtered = False

                    if not byhour or hour in byhour:
                        valid = True
                        break

                if not valid:
                    raise ValueError('Invalid combination of interval and ' +
                                     'byhour resulting in empty rule.')

                timeset = gettimeset(hour, minute, second)
            elif freq == SECONDLY:
                if filtered:
                    # Jump to one iteration before next day
                    second += (((86399 - (hour * 3600 + minute * 60 + second))
                                // interval) * interval)

                rep_rate = (24 * 3600)
                valid = False
                for j in range(0, rep_rate // gcd(interval, rep_rate)):
                    if bysecond:
                        nminutes, second = \
                            self.__mod_distance(value=second,
                                                byxxx=self._bysecond,
                                                base=60)
                    else:
                        nminutes, second = divmod(second+interval, 60)

                    div, minute = divmod(minute+nminutes, 60)
                    if div:
                        hour += div
                        div, hour = divmod(hour, 24)
                        if div:
                            day += div
                            fixday = True

                    if ((not byhour or hour in byhour) and
                            (not byminute or minute in byminute) and
                            (not bysecond or second in bysecond)):
                        valid = True
                        break

                if not valid:
                    raise ValueError('Invalid combination of interval, ' +
                                     'byhour and byminute resulting in empty' +
                                     ' rule.')

                timeset = gettimeset(hour, minute, second)

            if fixday and day > 28:
                daysinmonth = calendar.monthrange(year, month)[1]
                if day > daysinmonth:
                    while day > daysinmonth:
                        day -= daysinmonth
                        month += 1
                        if month == 13:
                            month = 1
                            year += 1
                            if year > datetime.MAXYEAR:
                                self._len = total
                                return
                        daysinmonth = calendar.monthrange(year, month)[1]
                    ii.rebuild(year, month)

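# NOTE (editorial sketch, not part of dateutil): the MONTHLY branch of
# _iter above normalizes an incremented month back into 1..12, carrying
# whole years; the arithmetic can be shown standalone (name hypothetical):

```python
def advance_month(year, month, interval):
    # Add the interval, then normalize: divmod(month, 12) carries whole
    # years, and a remainder of 0 means December of the previous year.
    month += interval
    if month > 12:
        div, mod = divmod(month, 12)
        month = mod
        year += div
        if month == 0:
            month = 12
            year -= 1
    return year, month

print(advance_month(2023, 11, 3))   # (2024, 2)
print(advance_month(2023, 11, 13))  # (2024, 12)
```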
    def __construct_byset(self, start, byxxx, base):
        """
        If a `BYXXX` sequence is passed to the constructor at the same level as
        `FREQ` (e.g. `FREQ=HOURLY,BYHOUR={2,4,7},INTERVAL=3`), there are some
        specifications which cannot be reached given some starting conditions.

        This occurs whenever the interval is not coprime with the base of a
        given unit and the difference between the starting position and the
        ending position is not coprime with the greatest common denominator
        between the interval and the base. For example, with a FREQ of hourly
        starting at 17:00 and an interval of 4, the only valid values for
        BYHOUR would be {21, 1, 5, 9, 13, 17}, because 4 and 24 are not
        coprime.

        :param start:
            Specifies the starting position.
        :param byxxx:
            An iterable containing the list of allowed values.
        :param base:
            The largest allowable value for the specified frequency (e.g.
            24 hours, 60 minutes).

        This does not preserve the type of the iterable, returning a set, since
        the values should be unique and the order is irrelevant, this will
        speed up later lookups.

        In the event of an empty set, raises a :exception:`ValueError`, as this
        results in an empty rrule.
        """

        cset = set()

        # Support a single byxxx value.
        if isinstance(byxxx, integer_types):
            byxxx = (byxxx, )

        for num in byxxx:
            i_gcd = gcd(self._interval, base)
            # Use divmod rather than % because we need to wrap negative nums.
            if i_gcd == 1 or divmod(num - start, i_gcd)[1] == 0:
                cset.add(num)

        if len(cset) == 0:
            raise ValueError("Invalid rrule byxxx generates an empty set.")

        return cset

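# NOTE (editorial sketch, not part of dateutil): the reachability filter in
# __construct_byset above can be reproduced standalone -- a BYXXX value is
# reachable from `start` iff (value - start) is a multiple of
# gcd(interval, base). Names below are illustrative, not dateutil API:

```python
from math import gcd

def construct_byset(start, byxxx, base, interval):
    # Keep only the values reachable from `start` when repeatedly
    # stepping by `interval` modulo `base`.
    step = gcd(interval, base)
    reachable = {num for num in byxxx
                 if step == 1 or (num - start) % step == 0}
    if not reachable:
        raise ValueError("byxxx and interval produce an empty rule")
    return reachable

# Starting at 17:00 with INTERVAL=4 (gcd(4, 24) == 4), only hours
# congruent to 17 mod 4 are ever visited:
print(sorted(construct_byset(17, range(24), 24, 4)))
# [1, 5, 9, 13, 17, 21]
```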
    def __mod_distance(self, value, byxxx, base):
        """
        Calculates the next value in a sequence where the `FREQ` parameter is
        specified along with a `BYXXX` parameter at the same "level"
        (e.g. `HOURLY` specified with `BYHOUR`).

        :param value:
            The old value of the component.
        :param byxxx:
            The `BYXXX` set, which should have been generated by
            `rrule._construct_byset`, or something else which checks that a
            valid rule is present.
        :param base:
            The largest allowable value for the specified frequency (e.g.
            24 hours, 60 minutes).

        If a valid value is not found after `base` iterations (the maximum
        number before the sequence would start to repeat), this raises a
        :exception:`ValueError`, as no valid values were found.

        This returns a tuple of `divmod(n*interval, base)`, where `n` is the
        smallest number of `interval` repetitions until the next specified
        value in `byxxx` is found.
        """
        accumulator = 0
        for ii in range(1, base + 1):
            # Using divmod() over % to account for negative intervals
            div, value = divmod(value + self._interval, base)
            accumulator += div
            if value in byxxx:
                return (accumulator, value)


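# NOTE (editorial sketch, not part of dateutil): __mod_distance above steps
# a clock component by the interval until it lands in the BYXXX set,
# counting wrap-arounds as a carry into the next-larger unit. A standalone
# version (names hypothetical) that also raises on exhaustion:

```python
def mod_distance(value, byxxx, base, interval):
    # Step `value` by `interval` (mod `base`) until a member of `byxxx`
    # is hit; return (carry, new_value), where `carry` is how many times
    # the component wrapped past `base`.
    carry = 0
    for _ in range(base):
        div, value = divmod(value + interval, base)
        carry += div
        if value in byxxx:
            return carry, value
    raise ValueError("no member of byxxx is reachable with this interval")

# From hour 22, stepping 3 hours toward BYHOUR={1, 7}: 22 -> 1 wraps
# past midnight once, so one day is carried.
print(mod_distance(22, {1, 7}, 24, 3))  # (1, 1)
```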
class _iterinfo(object):
    __slots__ = ["rrule", "lastyear", "lastmonth",
                 "yearlen", "nextyearlen", "yearordinal", "yearweekday",
                 "mmask", "mrange", "mdaymask", "nmdaymask",
                 "wdaymask", "wnomask", "nwdaymask", "eastermask"]

    def __init__(self, rrule):
        for attr in self.__slots__:
            setattr(self, attr, None)
        self.rrule = rrule

    def rebuild(self, year, month):
        # Every mask is 7 days longer to handle cross-year weekly periods.
        rr = self.rrule
        if year != self.lastyear:
            self.yearlen = 365 + calendar.isleap(year)
            self.nextyearlen = 365 + calendar.isleap(year + 1)
            firstyday = datetime.date(year, 1, 1)
            self.yearordinal = firstyday.toordinal()
            self.yearweekday = firstyday.weekday()

            wday = datetime.date(year, 1, 1).weekday()
            if self.yearlen == 365:
                self.mmask = M365MASK
                self.mdaymask = MDAY365MASK
                self.nmdaymask = NMDAY365MASK
                self.wdaymask = WDAYMASK[wday:]
                self.mrange = M365RANGE
            else:
                self.mmask = M366MASK
                self.mdaymask = MDAY366MASK
                self.nmdaymask = NMDAY366MASK
                self.wdaymask = WDAYMASK[wday:]
                self.mrange = M366RANGE

            if not rr._byweekno:
                self.wnomask = None
            else:
                self.wnomask = [0]*(self.yearlen+7)
                # no1wkst = firstwkst = self.wdaymask.index(rr._wkst)
                no1wkst = firstwkst = (7-self.yearweekday+rr._wkst) % 7
                if no1wkst >= 4:
                    no1wkst = 0
                    # Number of days in the year, plus the days we got
                    # from last year.
                    wyearlen = self.yearlen+(self.yearweekday-rr._wkst) % 7
                else:
                    # Number of days in the year, minus the days we
                    # left in last year.
                    wyearlen = self.yearlen-no1wkst
                div, mod = divmod(wyearlen, 7)
                numweeks = div+mod//4
                for n in rr._byweekno:
                    if n < 0:
                        n += numweeks+1
                    if not (0 < n <= numweeks):
                        continue
                    if n > 1:
                        i = no1wkst+(n-1)*7
                        if no1wkst != firstwkst:
                            i -= 7-firstwkst
                    else:
                        i = no1wkst
                    for j in range(7):
                        self.wnomask[i] = 1
                        i += 1
                        if self.wdaymask[i] == rr._wkst:
                            break
                if 1 in rr._byweekno:
                    # Check week number 1 of next year as well
                    # TODO: Check -numweeks for next year.
                    i = no1wkst+numweeks*7
                    if no1wkst != firstwkst:
                        i -= 7-firstwkst
                    if i < self.yearlen:
                        # If week starts in next year, we
                        # don't care about it.
                        for j in range(7):
                            self.wnomask[i] = 1
                            i += 1
                            if self.wdaymask[i] == rr._wkst:
                                break
                if no1wkst:
                    # Check last week number of last year as
                    # well. If no1wkst is 0, either the year
                    # started on week start, or week number 1
                    # got days from last year, so there are no
                    # days from last year's last week number in
                    # this year.
                    if -1 not in rr._byweekno:
                        lyearweekday = datetime.date(year-1, 1, 1).weekday()
                        lno1wkst = (7-lyearweekday+rr._wkst) % 7
                        lyearlen = 365+calendar.isleap(year-1)
                        if lno1wkst >= 4:
                            lno1wkst = 0
                            lnumweeks = 52+(lyearlen +
                                            (lyearweekday-rr._wkst) % 7) % 7//4
                        else:
                            lnumweeks = 52+(self.yearlen-no1wkst) % 7//4
                    else:
                        lnumweeks = -1
                    if lnumweeks in rr._byweekno:
                        for i in range(no1wkst):
                            self.wnomask[i] = 1

        if (rr._bynweekday and (month != self.lastmonth or
                                year != self.lastyear)):
            ranges = []
            if rr._freq == YEARLY:
                if rr._bymonth:
                    for month in rr._bymonth:
                        ranges.append(self.mrange[month-1:month+1])
                else:
                    ranges = [(0, self.yearlen)]
            elif rr._freq == MONTHLY:
                ranges = [self.mrange[month-1:month+1]]
            if ranges:
                # Weekly frequency won't get here, so we may not
                # care about cross-year weekly periods.
                self.nwdaymask = [0]*self.yearlen
                for first, last in ranges:
                    last -= 1
                    for wday, n in rr._bynweekday:
                        if n < 0:
                            i = last+(n+1)*7
                            i -= (self.wdaymask[i]-wday) % 7
                        else:
                            i = first+(n-1)*7
                            i += (7-self.wdaymask[i]+wday) % 7
                        if first <= i <= last:
                            self.nwdaymask[i] = 1

        if rr._byeaster:
            self.eastermask = [0]*(self.yearlen+7)
            eyday = easter.easter(year).toordinal()-self.yearordinal
            for offset in rr._byeaster:
                self.eastermask[eyday+offset] = 1

        self.lastyear = year
        self.lastmonth = month

    def ydayset(self, year, month, day):
        return list(range(self.yearlen)), 0, self.yearlen

    def mdayset(self, year, month, day):
        dset = [None]*self.yearlen
        start, end = self.mrange[month-1:month+1]
        for i in range(start, end):
            dset[i] = i
        return dset, start, end

    def wdayset(self, year, month, day):
        # We need to handle cross-year weeks here.
        dset = [None]*(self.yearlen+7)
        i = datetime.date(year, month, day).toordinal()-self.yearordinal
        start = i
        for j in range(7):
            dset[i] = i
            i += 1
            # if (not (0 <= i < self.yearlen) or
            #     self.wdaymask[i] == self.rrule._wkst):
            # This will cross the year boundary, if necessary.
            if self.wdaymask[i] == self.rrule._wkst:
                break
        return dset, start, i

    def ddayset(self, year, month, day):
        dset = [None] * self.yearlen
        i = datetime.date(year, month, day).toordinal() - self.yearordinal
        dset[i] = i
        return dset, i, i + 1

    def htimeset(self, hour, minute, second):
        tset = []
        rr = self.rrule
        for minute in rr._byminute:
            for second in rr._bysecond:
                tset.append(datetime.time(hour, minute, second,
                                          tzinfo=rr._tzinfo))
        tset.sort()
        return tset

    def mtimeset(self, hour, minute, second):
        tset = []
        rr = self.rrule
        for second in rr._bysecond:
            tset.append(datetime.time(hour, minute, second, tzinfo=rr._tzinfo))
        tset.sort()
        return tset

    def stimeset(self, hour, minute, second):
        return (datetime.time(hour, minute, second,
                              tzinfo=self.rrule._tzinfo),)


class rruleset(rrulebase):
    """ The rruleset type allows more complex recurrence setups, mixing
    multiple rules, dates, exclusion rules, and exclusion dates. The type
    constructor takes the following keyword arguments:

    :param cache: If True, caching of results will be enabled, improving
                  performance of multiple queries considerably. """

    class _genitem(object):
        def __init__(self, genlist, gen):
            try:
                self.dt = advance_iterator(gen)
                genlist.append(self)
            except StopIteration:
                pass
            self.genlist = genlist
            self.gen = gen

        def __next__(self):
            try:
                self.dt = advance_iterator(self.gen)
            except StopIteration:
                if self.genlist[0] is self:
                    heapq.heappop(self.genlist)
                else:
                    self.genlist.remove(self)
                    heapq.heapify(self.genlist)

        next = __next__

        def __lt__(self, other):
            return self.dt < other.dt

        def __gt__(self, other):
            return self.dt > other.dt

        def __eq__(self, other):
            return self.dt == other.dt

        def __ne__(self, other):
            return self.dt != other.dt

    def __init__(self, cache=False):
        super(rruleset, self).__init__(cache)
        self._rrule = []
        self._rdate = []
        self._exrule = []
        self._exdate = []

    @_invalidates_cache
    def rrule(self, rrule):
        """ Include the given :py:class:`rrule` instance in the recurrence set
            generation. """
        self._rrule.append(rrule)

    @_invalidates_cache
    def rdate(self, rdate):
        """ Include the given :py:class:`datetime` instance in the recurrence
            set generation. """
        self._rdate.append(rdate)

    @_invalidates_cache
    def exrule(self, exrule):
        """ Include the given rrule instance in the recurrence set exclusion
            list. Dates which are part of the given recurrence rules will not
            be generated, even if some inclusive rrule or rdate matches them.
        """
        self._exrule.append(exrule)

    @_invalidates_cache
    def exdate(self, exdate):
        """ Include the given datetime instance in the recurrence set
            exclusion list. Dates included that way will not be generated,
            even if some inclusive rrule or rdate matches them. """
        self._exdate.append(exdate)

    def _iter(self):
        rlist = []
        self._rdate.sort()
        self._genitem(rlist, iter(self._rdate))
        for gen in [iter(x) for x in self._rrule]:
            self._genitem(rlist, gen)
        exlist = []
        self._exdate.sort()
        self._genitem(exlist, iter(self._exdate))
        for gen in [iter(x) for x in self._exrule]:
            self._genitem(exlist, gen)
        lastdt = None
        total = 0
        heapq.heapify(rlist)
        heapq.heapify(exlist)
        while rlist:
            ritem = rlist[0]
            if not lastdt or lastdt != ritem.dt:
                while exlist and exlist[0] < ritem:
                    exitem = exlist[0]
                    advance_iterator(exitem)
                    if exlist and exlist[0] is exitem:
                        heapq.heapreplace(exlist, exitem)
                if not exlist or ritem != exlist[0]:
                    total += 1
                    yield ritem.dt
                lastdt = ritem.dt
            advance_iterator(ritem)
            if rlist and rlist[0] is ritem:
                heapq.heapreplace(rlist, ritem)
|
1413 |
+
self._len = total
|
1414 |
+
|
1415 |
+
|
1416 |
+
|
1417 |
+
|
1418 |
+
class _rrulestr(object):
|
1419 |
+
""" Parses a string representation of a recurrence rule or set of
|
1420 |
+
recurrence rules.
|
1421 |
+
|
1422 |
+
:param s:
|
1423 |
+
Required, a string defining one or more recurrence rules.
|
1424 |
+
|
1425 |
+
:param dtstart:
|
1426 |
+
If given, used as the default recurrence start if not specified in the
|
1427 |
+
rule string.
|
1428 |
+
|
1429 |
+
:param cache:
|
1430 |
+
If set ``True`` caching of results will be enabled, improving
|
1431 |
+
performance of multiple queries considerably.
|
1432 |
+
|
1433 |
+
:param unfold:
|
1434 |
+
If set ``True`` indicates that a rule string is split over more
|
1435 |
+
than one line and should be joined before processing.
|
1436 |
+
|
1437 |
+
:param forceset:
|
1438 |
+
If set ``True`` forces a :class:`dateutil.rrule.rruleset` to
|
1439 |
+
be returned.
|
1440 |
+
|
1441 |
+
:param compatible:
|
1442 |
+
If set ``True`` forces ``unfold`` and ``forceset`` to be ``True``.
|
1443 |
+
|
1444 |
+
:param ignoretz:
|
1445 |
+
If set ``True``, time zones in parsed strings are ignored and a naive
|
1446 |
+
:class:`datetime.datetime` object is returned.
|
1447 |
+
|
1448 |
+
:param tzids:
|
1449 |
+
If given, a callable or mapping used to retrieve a
|
1450 |
+
:class:`datetime.tzinfo` from a string representation.
|
1451 |
+
Defaults to :func:`dateutil.tz.gettz`.
|
1452 |
+
|
1453 |
+
:param tzinfos:
|
1454 |
+
Additional time zone names / aliases which may be present in a string
|
1455 |
+
representation. See :func:`dateutil.parser.parse` for more
|
1456 |
+
information.
|
1457 |
+
|
1458 |
+
:return:
|
1459 |
+
Returns a :class:`dateutil.rrule.rruleset` or
|
1460 |
+
:class:`dateutil.rrule.rrule`
|
1461 |
+
"""
|
1462 |
+
|
1463 |
+
_freq_map = {"YEARLY": YEARLY,
|
1464 |
+
"MONTHLY": MONTHLY,
|
1465 |
+
"WEEKLY": WEEKLY,
|
1466 |
+
"DAILY": DAILY,
|
1467 |
+
"HOURLY": HOURLY,
|
1468 |
+
"MINUTELY": MINUTELY,
|
1469 |
+
"SECONDLY": SECONDLY}
|
1470 |
+
|
1471 |
+
_weekday_map = {"MO": 0, "TU": 1, "WE": 2, "TH": 3,
|
1472 |
+
"FR": 4, "SA": 5, "SU": 6}
|
1473 |
+
|
1474 |
+
def _handle_int(self, rrkwargs, name, value, **kwargs):
|
1475 |
+
rrkwargs[name.lower()] = int(value)
|
1476 |
+
|
1477 |
+
def _handle_int_list(self, rrkwargs, name, value, **kwargs):
|
1478 |
+
rrkwargs[name.lower()] = [int(x) for x in value.split(',')]
|
1479 |
+
|
1480 |
+
_handle_INTERVAL = _handle_int
|
1481 |
+
_handle_COUNT = _handle_int
|
1482 |
+
_handle_BYSETPOS = _handle_int_list
|
1483 |
+
_handle_BYMONTH = _handle_int_list
|
1484 |
+
_handle_BYMONTHDAY = _handle_int_list
|
1485 |
+
_handle_BYYEARDAY = _handle_int_list
|
1486 |
+
_handle_BYEASTER = _handle_int_list
|
1487 |
+
_handle_BYWEEKNO = _handle_int_list
|
1488 |
+
_handle_BYHOUR = _handle_int_list
|
1489 |
+
_handle_BYMINUTE = _handle_int_list
|
1490 |
+
_handle_BYSECOND = _handle_int_list
|
1491 |
+
|
1492 |
+
def _handle_FREQ(self, rrkwargs, name, value, **kwargs):
|
1493 |
+
rrkwargs["freq"] = self._freq_map[value]
|
1494 |
+
|
1495 |
+
def _handle_UNTIL(self, rrkwargs, name, value, **kwargs):
|
1496 |
+
global parser
|
1497 |
+
if not parser:
|
1498 |
+
from dateutil import parser
|
1499 |
+
try:
|
1500 |
+
rrkwargs["until"] = parser.parse(value,
|
1501 |
+
ignoretz=kwargs.get("ignoretz"),
|
1502 |
+
tzinfos=kwargs.get("tzinfos"))
|
1503 |
+
except ValueError:
|
1504 |
+
raise ValueError("invalid until date")
|
1505 |
+
|
1506 |
+
def _handle_WKST(self, rrkwargs, name, value, **kwargs):
|
1507 |
+
rrkwargs["wkst"] = self._weekday_map[value]
|
1508 |
+
|
1509 |
+
def _handle_BYWEEKDAY(self, rrkwargs, name, value, **kwargs):
|
1510 |
+
"""
|
1511 |
+
Two ways to specify this: +1MO or MO(+1)
|
1512 |
+
"""
|
1513 |
+
l = []
|
1514 |
+
for wday in value.split(','):
|
1515 |
+
if '(' in wday:
|
1516 |
+
# If it's of the form TH(+1), etc.
|
1517 |
+
splt = wday.split('(')
|
1518 |
+
w = splt[0]
|
1519 |
+
n = int(splt[1][:-1])
|
1520 |
+
elif len(wday):
|
1521 |
+
# If it's of the form +1MO
|
1522 |
+
for i in range(len(wday)):
|
1523 |
+
if wday[i] not in '+-0123456789':
|
1524 |
+
break
|
1525 |
+
n = wday[:i] or None
|
1526 |
+
w = wday[i:]
|
1527 |
+
if n:
|
1528 |
+
n = int(n)
|
1529 |
+
else:
|
1530 |
+
raise ValueError("Invalid (empty) BYDAY specification.")
|
1531 |
+
|
1532 |
+
l.append(weekdays[self._weekday_map[w]](n))
|
1533 |
+
rrkwargs["byweekday"] = l
|
1534 |
+
|
1535 |
+
_handle_BYDAY = _handle_BYWEEKDAY
|
1536 |
+
|
1537 |
+
def _parse_rfc_rrule(self, line,
|
1538 |
+
dtstart=None,
|
1539 |
+
cache=False,
|
1540 |
+
ignoretz=False,
|
1541 |
+
tzinfos=None):
|
1542 |
+
if line.find(':') != -1:
|
1543 |
+
name, value = line.split(':')
|
1544 |
+
if name != "RRULE":
|
1545 |
+
raise ValueError("unknown parameter name")
|
1546 |
+
else:
|
1547 |
+
value = line
|
1548 |
+
rrkwargs = {}
|
1549 |
+
for pair in value.split(';'):
|
1550 |
+
name, value = pair.split('=')
|
1551 |
+
name = name.upper()
|
1552 |
+
value = value.upper()
|
1553 |
+
try:
|
1554 |
+
getattr(self, "_handle_"+name)(rrkwargs, name, value,
|
1555 |
+
ignoretz=ignoretz,
|
1556 |
+
tzinfos=tzinfos)
|
1557 |
+
except AttributeError:
|
1558 |
+
raise ValueError("unknown parameter '%s'" % name)
|
1559 |
+
except (KeyError, ValueError):
|
1560 |
+
raise ValueError("invalid '%s': %s" % (name, value))
|
1561 |
+
return rrule(dtstart=dtstart, cache=cache, **rrkwargs)
|
1562 |
+
|
1563 |
+
def _parse_date_value(self, date_value, parms, rule_tzids,
|
1564 |
+
ignoretz, tzids, tzinfos):
|
1565 |
+
global parser
|
1566 |
+
if not parser:
|
1567 |
+
from dateutil import parser
|
1568 |
+
|
1569 |
+
datevals = []
|
1570 |
+
value_found = False
|
1571 |
+
TZID = None
|
1572 |
+
|
1573 |
+
for parm in parms:
|
1574 |
+
if parm.startswith("TZID="):
|
1575 |
+
try:
|
1576 |
+
tzkey = rule_tzids[parm.split('TZID=')[-1]]
|
1577 |
+
except KeyError:
|
1578 |
+
continue
|
1579 |
+
if tzids is None:
|
1580 |
+
from . import tz
|
1581 |
+
tzlookup = tz.gettz
|
1582 |
+
elif callable(tzids):
|
1583 |
+
tzlookup = tzids
|
1584 |
+
else:
|
1585 |
+
tzlookup = getattr(tzids, 'get', None)
|
1586 |
+
if tzlookup is None:
|
1587 |
+
msg = ('tzids must be a callable, mapping, or None, '
|
1588 |
+
'not %s' % tzids)
|
1589 |
+
raise ValueError(msg)
|
1590 |
+
|
1591 |
+
TZID = tzlookup(tzkey)
|
1592 |
+
continue
|
1593 |
+
|
1594 |
+
# RFC 5445 3.8.2.4: The VALUE parameter is optional, but may be found
|
1595 |
+
# only once.
|
1596 |
+
if parm not in {"VALUE=DATE-TIME", "VALUE=DATE"}:
|
1597 |
+
raise ValueError("unsupported parm: " + parm)
|
1598 |
+
else:
|
1599 |
+
if value_found:
|
1600 |
+
msg = ("Duplicate value parameter found in: " + parm)
|
1601 |
+
raise ValueError(msg)
|
1602 |
+
value_found = True
|
1603 |
+
|
1604 |
+
for datestr in date_value.split(','):
|
1605 |
+
date = parser.parse(datestr, ignoretz=ignoretz, tzinfos=tzinfos)
|
1606 |
+
if TZID is not None:
|
1607 |
+
if date.tzinfo is None:
|
1608 |
+
date = date.replace(tzinfo=TZID)
|
1609 |
+
else:
|
1610 |
+
raise ValueError('DTSTART/EXDATE specifies multiple timezone')
|
1611 |
+
datevals.append(date)
|
1612 |
+
|
1613 |
+
return datevals
|
1614 |
+
|
1615 |
+
def _parse_rfc(self, s,
|
1616 |
+
dtstart=None,
|
1617 |
+
cache=False,
|
1618 |
+
unfold=False,
|
1619 |
+
forceset=False,
|
1620 |
+
compatible=False,
|
1621 |
+
ignoretz=False,
|
1622 |
+
tzids=None,
|
1623 |
+
tzinfos=None):
|
1624 |
+
global parser
|
1625 |
+
if compatible:
|
1626 |
+
forceset = True
|
1627 |
+
unfold = True
|
1628 |
+
|
1629 |
+
TZID_NAMES = dict(map(
|
1630 |
+
lambda x: (x.upper(), x),
|
1631 |
+
re.findall('TZID=(?P<name>[^:]+):', s)
|
1632 |
+
))
|
1633 |
+
s = s.upper()
|
1634 |
+
if not s.strip():
|
1635 |
+
raise ValueError("empty string")
|
1636 |
+
if unfold:
|
1637 |
+
lines = s.splitlines()
|
1638 |
+
i = 0
|
1639 |
+
while i < len(lines):
|
1640 |
+
line = lines[i].rstrip()
|
1641 |
+
if not line:
|
1642 |
+
del lines[i]
|
1643 |
+
elif i > 0 and line[0] == " ":
|
1644 |
+
lines[i-1] += line[1:]
|
1645 |
+
del lines[i]
|
1646 |
+
else:
|
1647 |
+
i += 1
|
1648 |
+
else:
|
1649 |
+
lines = s.split()
|
1650 |
+
if (not forceset and len(lines) == 1 and (s.find(':') == -1 or
|
1651 |
+
s.startswith('RRULE:'))):
|
1652 |
+
return self._parse_rfc_rrule(lines[0], cache=cache,
|
1653 |
+
dtstart=dtstart, ignoretz=ignoretz,
|
1654 |
+
tzinfos=tzinfos)
|
1655 |
+
else:
|
1656 |
+
rrulevals = []
|
1657 |
+
rdatevals = []
|
1658 |
+
exrulevals = []
|
1659 |
+
exdatevals = []
|
1660 |
+
for line in lines:
|
1661 |
+
if not line:
|
1662 |
+
continue
|
1663 |
+
if line.find(':') == -1:
|
1664 |
+
name = "RRULE"
|
1665 |
+
value = line
|
1666 |
+
else:
|
1667 |
+
name, value = line.split(':', 1)
|
1668 |
+
parms = name.split(';')
|
1669 |
+
if not parms:
|
1670 |
+
raise ValueError("empty property name")
|
1671 |
+
name = parms[0]
|
1672 |
+
parms = parms[1:]
|
1673 |
+
if name == "RRULE":
|
1674 |
+
for parm in parms:
|
1675 |
+
raise ValueError("unsupported RRULE parm: "+parm)
|
1676 |
+
rrulevals.append(value)
|
1677 |
+
elif name == "RDATE":
|
1678 |
+
for parm in parms:
|
1679 |
+
if parm != "VALUE=DATE-TIME":
|
1680 |
+
raise ValueError("unsupported RDATE parm: "+parm)
|
1681 |
+
rdatevals.append(value)
|
1682 |
+
elif name == "EXRULE":
|
1683 |
+
for parm in parms:
|
1684 |
+
raise ValueError("unsupported EXRULE parm: "+parm)
|
1685 |
+
exrulevals.append(value)
|
1686 |
+
elif name == "EXDATE":
|
1687 |
+
exdatevals.extend(
|
1688 |
+
self._parse_date_value(value, parms,
|
1689 |
+
TZID_NAMES, ignoretz,
|
1690 |
+
tzids, tzinfos)
|
1691 |
+
)
|
1692 |
+
elif name == "DTSTART":
|
1693 |
+
dtvals = self._parse_date_value(value, parms, TZID_NAMES,
|
1694 |
+
ignoretz, tzids, tzinfos)
|
1695 |
+
if len(dtvals) != 1:
|
1696 |
+
raise ValueError("Multiple DTSTART values specified:" +
|
1697 |
+
value)
|
1698 |
+
dtstart = dtvals[0]
|
1699 |
+
else:
|
1700 |
+
raise ValueError("unsupported property: "+name)
|
1701 |
+
if (forceset or len(rrulevals) > 1 or rdatevals
|
1702 |
+
or exrulevals or exdatevals):
|
1703 |
+
if not parser and (rdatevals or exdatevals):
|
1704 |
+
from dateutil import parser
|
1705 |
+
rset = rruleset(cache=cache)
|
1706 |
+
for value in rrulevals:
|
1707 |
+
rset.rrule(self._parse_rfc_rrule(value, dtstart=dtstart,
|
1708 |
+
ignoretz=ignoretz,
|
1709 |
+
tzinfos=tzinfos))
|
1710 |
+
for value in rdatevals:
|
1711 |
+
for datestr in value.split(','):
|
1712 |
+
rset.rdate(parser.parse(datestr,
|
1713 |
+
ignoretz=ignoretz,
|
1714 |
+
tzinfos=tzinfos))
|
1715 |
+
for value in exrulevals:
|
1716 |
+
rset.exrule(self._parse_rfc_rrule(value, dtstart=dtstart,
|
1717 |
+
ignoretz=ignoretz,
|
1718 |
+
tzinfos=tzinfos))
|
1719 |
+
for value in exdatevals:
|
1720 |
+
rset.exdate(value)
|
1721 |
+
if compatible and dtstart:
|
1722 |
+
rset.rdate(dtstart)
|
1723 |
+
return rset
|
1724 |
+
else:
|
1725 |
+
return self._parse_rfc_rrule(rrulevals[0],
|
1726 |
+
dtstart=dtstart,
|
1727 |
+
cache=cache,
|
1728 |
+
ignoretz=ignoretz,
|
1729 |
+
tzinfos=tzinfos)
|
1730 |
+
|
1731 |
+
def __call__(self, s, **kwargs):
|
1732 |
+
return self._parse_rfc(s, **kwargs)
|
1733 |
+
|
1734 |
+
|
1735 |
+
rrulestr = _rrulestr()
|
1736 |
+
|
1737 |
+
# vim:ts=4:sw=4:et
|
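The merge in `rruleset._iter` above is a heap-based k-way merge over already-sorted generators: the smallest head is yielded, its generator is advanced, and the heap is re-sifted with `heapreplace`. A minimal standalone sketch of that pattern (the helper name `merge_sorted` is hypothetical, not part of dateutil):

```python
import heapq


def merge_sorted(*iterables):
    """k-way merge of already-sorted iterables, mirroring the
    heap-of-generators pattern used by rruleset._iter."""
    heap = []
    for idx, it in enumerate(map(iter, iterables)):
        try:
            # (value, tiebreak index, generator); the index keeps
            # comparisons well-defined when values are equal
            heap.append((next(it), idx, it))
        except StopIteration:
            pass  # empty iterable, like _genitem's __init__
    heapq.heapify(heap)
    out = []
    while heap:
        value, idx, it = heap[0]
        out.append(value)
        try:
            # advance the winning generator and re-sift in one step
            heapq.heapreplace(heap, (next(it), idx, it))
        except StopIteration:
            heapq.heappop(heap)  # generator exhausted, drop it
    return out


print(merge_sorted([1, 4, 7], [2, 5], [3, 6, 8]))
```

Unlike this sketch, `_iter` also skips dates present in the exclusion heap and deduplicates via `lastdt`, but the core advance-and-resift loop is the same.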
llmeval-env/lib/python3.10/site-packages/isympy.py
ADDED
@@ -0,0 +1,342 @@
"""
Python shell for SymPy.

This is just a normal Python shell (IPython shell if you have the
IPython package installed), that executes the following commands for
the user:

>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()

So starting 'isympy' is equivalent to starting Python (or IPython) and
executing the above commands by hand. It is intended for easy and quick
experimentation with SymPy. isympy is a good way to use SymPy as an
interactive calculator. If you have IPython and Matplotlib installed, then
interactive plotting is enabled by default.

COMMAND LINE OPTIONS
--------------------

-c CONSOLE, --console=CONSOLE

    Use the specified shell (Python or IPython) shell as the console
    backend instead of the default one (IPython if present, Python
    otherwise), e.g.:

        $isympy -c python

    CONSOLE must be one of 'ipython' or 'python'

-p PRETTY, --pretty PRETTY

    Setup pretty-printing in SymPy. When pretty-printing is enabled,
    expressions can be printed with Unicode or ASCII. The default is
    to use pretty-printing (with Unicode if the terminal supports it).
    When this option is 'no', expressions will not be pretty-printed
    and ASCII will be used:

        $isympy -p no

    PRETTY must be one of 'unicode', 'ascii', or 'no'

-t TYPES, --types=TYPES

    Setup the ground types for the polys. By default, gmpy ground types
    are used if gmpy2 or gmpy is installed, otherwise it falls back to python
    ground types, which are a little bit slower. You can manually
    choose python ground types even if gmpy is installed (e.g., for
    testing purposes):

        $isympy -t python

    TYPES must be one of 'gmpy', 'gmpy1' or 'python'

    Note that the ground type gmpy1 is primarily intended for testing; it
    forces the use of gmpy version 1 even if gmpy2 is available.

    This is the same as setting the environment variable
    SYMPY_GROUND_TYPES to the given ground type (e.g.,
    SYMPY_GROUND_TYPES='gmpy')

    The ground types can be determined interactively from the variable
    sympy.polys.domains.GROUND_TYPES.

-o ORDER, --order ORDER

    Setup the ordering of terms for printing. The default is lex, which
    orders terms lexicographically (e.g., x**2 + x + 1). You can choose
    other orderings, such as rev-lex, which will use reverse
    lexicographic ordering (e.g., 1 + x + x**2):

        $isympy -o rev-lex

    ORDER must be one of 'lex', 'rev-lex', 'grlex', 'rev-grlex',
    'grevlex', 'rev-grevlex', 'old', or 'none'.

    Note that for very large expressions, ORDER='none' may speed up
    printing considerably but the terms will have no canonical order.

-q, --quiet

    Print only Python's and SymPy's versions to stdout at startup.

-d, --doctest

    Use the same format that should be used for doctests. This is
    equivalent to -c python -p no.

-C, --no-cache

    Disable the caching mechanism. Disabling the cache may slow certain
    operations down considerably. This is useful for testing the cache,
    or for benchmarking, as the cache can result in deceptive timings.

    This is equivalent to setting the environment variable
    SYMPY_USE_CACHE to 'no'.

-a, --auto-symbols (requires at least IPython 0.11)

    Automatically create missing symbols. Normally, typing a name of a
    Symbol that has not been instantiated first would raise NameError,
    but with this option enabled, any undefined name will be
    automatically created as a Symbol.

    Note that this is intended only for interactive, calculator style
    usage. In a script that uses SymPy, Symbols should be instantiated
    at the top, so that it's clear what they are.

    This will not override any names that are already defined, which
    includes the single character letters represented by the mnemonic
    QCOSINE (see the "Gotchas and Pitfalls" document in the
    documentation). You can delete existing names by executing "del
    name". If a name is defined, typing "'name' in dir()" will return True.

    The Symbols that are created using this have default assumptions.
    If you want to place assumptions on symbols, you should create them
    using symbols() or var().

    Finally, this only works in the top level namespace. So, for
    example, if you define a function in isympy with an undefined
    Symbol, it will not work.

    See also the -i and -I options.

-i, --int-to-Integer (requires at least IPython 0.11)

    Automatically wrap int literals with Integer. This makes it so that
    things like 1/2 will come out as Rational(1, 2), rather than 0.5. This
    works by preprocessing the source and wrapping all int literals with
    Integer. Note that this will not change the behavior of int literals
    assigned to variables, and it also won't change the behavior of functions
    that return int literals.

    If you want an int, you can wrap the literal in int(), e.g. int(3)/int(2)
    gives 1.5 (with division imported from __future__).

-I, --interactive (requires at least IPython 0.11)

    This is equivalent to --auto-symbols --int-to-Integer. Future options
    designed for ease of interactive use may be added to this.

-D, --debug

    Enable debugging output. This is the same as setting the
    environment variable SYMPY_DEBUG to 'True'. The debug status is set
    in the variable SYMPY_DEBUG within isympy.

-- IPython options

    Additionally you can pass command line options directly to the IPython
    interpreter (the standard Python shell is not supported). However you
    need to add the '--' separator between two types of options, e.g the
    startup banner option and the colors option. You need to enter the
    options as required by the version of IPython that you are using, too:

    in IPython 0.11,

        $isympy -q -- --colors=NoColor

    or older versions of IPython,

        $isympy -q -- -colors NoColor

See also isympy --help.
"""

import os
import sys

# DO NOT IMPORT SYMPY HERE! Or the setting of the sympy environment variables
# by the command line will break.

def main() -> None:
    from argparse import ArgumentParser, RawDescriptionHelpFormatter

    VERSION = None
    if '--version' in sys.argv:
        # We cannot import sympy before this is run, because flags like -C and
        # -t set environment variables that must be set before SymPy is
        # imported. The only thing we need to import it for is to get the
        # version, which only matters with the --version flag.
        import sympy
        VERSION = sympy.__version__

    usage = 'isympy [options] -- [ipython options]'
    parser = ArgumentParser(
        usage=usage,
        description=__doc__,
        formatter_class=RawDescriptionHelpFormatter,
    )

    parser.add_argument('--version', action='version', version=VERSION)

    parser.add_argument(
        '-c', '--console',
        dest='console',
        action='store',
        default=None,
        choices=['ipython', 'python'],
        metavar='CONSOLE',
        help='select type of interactive session: ipython | python; defaults '
        'to ipython if IPython is installed, otherwise python')

    parser.add_argument(
        '-p', '--pretty',
        dest='pretty',
        action='store',
        default=None,
        metavar='PRETTY',
        choices=['unicode', 'ascii', 'no'],
        help='setup pretty printing: unicode | ascii | no; defaults to '
        'unicode printing if the terminal supports it, otherwise ascii')

    parser.add_argument(
        '-t', '--types',
        dest='types',
        action='store',
        default=None,
        metavar='TYPES',
        choices=['gmpy', 'gmpy1', 'python'],
        help='setup ground types: gmpy | gmpy1 | python; defaults to gmpy if gmpy2 '
        'or gmpy is installed, otherwise python')

    parser.add_argument(
        '-o', '--order',
        dest='order',
        action='store',
        default=None,
        metavar='ORDER',
        choices=['lex', 'grlex', 'grevlex', 'rev-lex', 'rev-grlex', 'rev-grevlex', 'old', 'none'],
        help='setup ordering of terms: [rev-]lex | [rev-]grlex | [rev-]grevlex | old | none; defaults to lex')

    parser.add_argument(
        '-q', '--quiet',
        dest='quiet',
        action='store_true',
        default=False,
        help='print only version information at startup')

    parser.add_argument(
        '-d', '--doctest',
        dest='doctest',
        action='store_true',
        default=False,
        help='use the doctest format for output (you can just copy and paste it)')

    parser.add_argument(
        '-C', '--no-cache',
        dest='cache',
        action='store_false',
        default=True,
        help='disable caching mechanism')

    parser.add_argument(
        '-a', '--auto-symbols',
        dest='auto_symbols',
        action='store_true',
        default=False,
        help='automatically construct missing symbols')

    parser.add_argument(
        '-i', '--int-to-Integer',
        dest='auto_int_to_Integer',
        action='store_true',
        default=False,
        help="automatically wrap int literals with Integer")

    parser.add_argument(
        '-I', '--interactive',
        dest='interactive',
        action='store_true',
        default=False,
        help="equivalent to -a -i")

    parser.add_argument(
        '-D', '--debug',
        dest='debug',
        action='store_true',
        default=False,
        help='enable debugging output')

    (options, ipy_args) = parser.parse_known_args()
    if '--' in ipy_args:
        ipy_args.remove('--')

    if not options.cache:
        os.environ['SYMPY_USE_CACHE'] = 'no'

    if options.types:
        os.environ['SYMPY_GROUND_TYPES'] = options.types

    if options.debug:
        os.environ['SYMPY_DEBUG'] = str(options.debug)

    if options.doctest:
        options.pretty = 'no'
        options.console = 'python'

    session = options.console

    if session is not None:
        ipython = session == 'ipython'
    else:
        try:
            import IPython
            ipython = True
        except ImportError:
            if not options.quiet:
                from sympy.interactive.session import no_ipython
                print(no_ipython)
            ipython = False

    args = {
        'pretty_print': True,
        'use_unicode': None,
        'use_latex': None,
        'order': None,
        'argv': ipy_args,
    }

    if options.pretty == 'unicode':
        args['use_unicode'] = True
    elif options.pretty == 'ascii':
        args['use_unicode'] = False
    elif options.pretty == 'no':
        args['pretty_print'] = False

    if options.order is not None:
        args['order'] = options.order

    args['quiet'] = options.quiet
    args['auto_symbols'] = options.auto_symbols or options.interactive
    args['auto_int_to_Integer'] = options.auto_int_to_Integer or options.interactive

    from sympy.interactive import init_session
    init_session(ipython, **args)

if __name__ == "__main__":
    main()
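The option handling in `main()` relies on `parse_known_args`, which collects flags it does not recognize into a leftover list that isympy forwards to IPython; the explicit `'--'` removal is needed because argparse may leave the separator in that list. A minimal standalone sketch of the same pattern (the `-q` flag here mirrors isympy's, everything after `--` is a stand-in for IPython options):

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('-q', '--quiet', action='store_true')

# Known flags land in `options`; unknown ones (for the downstream
# interpreter) land in `ipy_args`.
options, ipy_args = parser.parse_known_args(['-q', '--', '--colors=NoColor'])

# Depending on the Python version, the '--' separator itself may be left
# in the unrecognized list, so strip it defensively as isympy does.
if '--' in ipy_args:
    ipy_args.remove('--')

print(options.quiet, ipy_args)
```

This is why isympy can accept its own flags and IPython's flags on one command line without declaring IPython's options itself.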
llmeval-env/lib/python3.10/site-packages/pip/py.typed
ADDED
@@ -0,0 +1,4 @@
pip is a command line program. While it is implemented in Python, and so is
available for import, you must not use pip's internal APIs in this way. Typing
information is provided as a convenience only and is not a guarantee. Expect
unannounced changes to the API and types in releases.
llmeval-env/lib/python3.10/site-packages/sqlitedict.py
ADDED
@@ -0,0 +1,697 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# This code is distributed under the terms and conditions
# from the Apache License, Version 2.0
#
# http://opensource.org/licenses/apache2.0.php
#
# This code was inspired by:
#  * http://code.activestate.com/recipes/576638-draft-for-an-sqlite3-based-dbm/
#  * http://code.activestate.com/recipes/526618/

"""
A lightweight wrapper around Python's sqlite3 database, with a dict-like interface
and multi-thread access support::

>>> mydict = SqliteDict('some.db', autocommit=True)  # the mapping will be persisted to file `some.db`
>>> mydict['some_key'] = any_picklable_object
>>> print(mydict['some_key'])
>>> print(len(mydict))  # etc... all dict functions work

Pickle is used internally to serialize the values. Keys are strings.

If you don't use autocommit (default is no autocommit for performance), then
don't forget to call `mydict.commit()` when done with a transaction.

"""

import sqlite3
import os
import sys
import tempfile
import threading
import logging
import traceback
from base64 import b64decode, b64encode
import weakref

__version__ = '2.1.0'


def reraise(tp, value, tb=None):
    if value is None:
        value = tp()
    if value.__traceback__ is not tb:
        raise value.with_traceback(tb)
    raise value


try:
    from cPickle import dumps, loads, HIGHEST_PROTOCOL as PICKLE_PROTOCOL
except ImportError:
    from pickle import dumps, loads, HIGHEST_PROTOCOL as PICKLE_PROTOCOL

# some Python 3 vs 2 imports
try:
    from collections import UserDict as DictClass
except ImportError:
    from UserDict import DictMixin as DictClass

try:
    from queue import Queue
except ImportError:
    from Queue import Queue


logger = logging.getLogger(__name__)

#
# There's a thread that holds the actual SQL connection (SqliteMultithread).
# We communicate with this thread via queues (request and responses).
# The requests can either be SQL commands or one of the "special" commands
# below:
#
# _REQUEST_CLOSE: request that the SQL connection be closed
# _REQUEST_COMMIT: request that any changes be committed to the DB
#
# Responses are either SQL records (e.g. results of a SELECT) or the magic
# _RESPONSE_NO_MORE command, which indicates nothing else will ever be written
# to the response queue.
#
_REQUEST_CLOSE = '--close--'
_REQUEST_COMMIT = '--commit--'
_RESPONSE_NO_MORE = '--no more--'

#
# We work with weak references for better memory efficiency.
# Dereferencing, checking the referent queue still exists, and putting to it
# is boring and repetitive, so we have a _put function to handle it for us.
#
_PUT_OK, _PUT_REFERENT_DESTROYED, _PUT_NOOP = 0, 1, 2


def _put(queue_reference, item):
    if queue_reference is not None:
        queue = queue_reference()
        if queue is None:
            #
            # We got a reference to a queue, but that queue no longer exists
            #
            retval = _PUT_REFERENT_DESTROYED
        else:
            queue.put(item)
            retval = _PUT_OK

        del queue
        return retval

    #
    # We didn't get a reference to a queue, so do nothing (no-op).
    #
    return _PUT_NOOP


def open(*args, **kwargs):
    """See documentation of the SqliteDict class."""
    return SqliteDict(*args, **kwargs)


def encode(obj):
    """Serialize an object using pickle to a binary format accepted by SQLite."""
    return sqlite3.Binary(dumps(obj, protocol=PICKLE_PROTOCOL))


def decode(obj):
    """Deserialize objects retrieved from SQLite."""
    return loads(bytes(obj))


def encode_key(key):
    """Serialize a key using pickle + base64 encoding to text accepted by SQLite."""
    return b64encode(dumps(key, protocol=PICKLE_PROTOCOL)).decode("ascii")


def decode_key(key):
    """Deserialize a key retrieved from SQLite."""
    return loads(b64decode(key.encode("ascii")))


def identity(obj):
    """Identity f(x) = x function for encoding/decoding."""
    return obj


class SqliteDict(DictClass):
    VALID_FLAGS = ['c', 'r', 'w', 'n']

    def __init__(self, filename=None, tablename='unnamed', flag='c',
                 autocommit=False, journal_mode="DELETE", encode=encode,
                 decode=decode, encode_key=identity, decode_key=identity,
                 timeout=5, outer_stack=True):
        """
        Initialize a thread-safe sqlite-backed dictionary. The dictionary will
        be a table `tablename` in database file `filename`. A single file (=database)
        may contain multiple tables.

        If no `filename` is given, a random file in temp will be used (and deleted
        from temp once the dict is closed/deleted).

        If you enable `autocommit`, changes will be committed after each operation
        (more inefficient but safer). Otherwise, changes are committed on `self.commit()`,
        `self.clear()` and `self.close()`.

        Set `journal_mode` to 'OFF' if you're experiencing sqlite I/O problems
        or if you need performance and don't care about crash-consistency.

        Set `outer_stack` to False to disable the output of the outer exception
        to the error logs. This may improve the efficiency of sqlitedict
        operation at the expense of a detailed exception trace.

        The `flag` parameter. Exactly one of:
          'c': default mode, open for read/write, creating the db/table if necessary.
          'w': open for r/w, but drop `tablename` contents first (start with empty table)
          'r': open as read-only
          'n': create a new database (erasing any existing tables, not just `tablename`!).

        The `encode` and `decode` parameters are used to customize how the values
        are serialized and deserialized.
        The `encode` parameter must be a function that takes a single Python
        object and returns a serialized representation.
        The `decode` function must be a function that takes the serialized
        representation produced by `encode` and returns a deserialized Python
        object.
        The default is to use pickle.

        The `timeout` defines the maximum time (in seconds) to wait for initial Thread startup.

        """
        self.in_temp = filename is None
        if self.in_temp:
            fd, filename = tempfile.mkstemp(prefix='sqldict')
            os.close(fd)

        if flag not in SqliteDict.VALID_FLAGS:
            raise RuntimeError("Unrecognized flag: %s" % flag)
        self.flag = flag

        if flag == 'n':
            if os.path.exists(filename):
                os.remove(filename)

        dirname = os.path.dirname(filename)
        if dirname:
            if not os.path.exists(dirname):
                raise RuntimeError('Error! The directory does not exist, %s' % dirname)

        self.filename = filename

        # Use standard SQL escaping of double quote characters in identifiers, by doubling them.
        # See https://github.com/RaRe-Technologies/sqlitedict/pull/113
        self.tablename = tablename.replace('"', '""')

        self.autocommit = autocommit
        self.journal_mode = journal_mode
        self.encode = encode
        self.decode = decode
        self.encode_key = encode_key
        self.decode_key = decode_key
        self._outer_stack = outer_stack

        logger.debug("opening Sqlite table %r in %r" % (tablename, filename))
        self.conn = self._new_conn()
        if self.flag == 'r':
            if self.tablename not in SqliteDict.get_tablenames(self.filename):
                msg = 'Refusing to create a new table "%s" in read-only DB mode' % tablename
                raise RuntimeError(msg)
        else:
            MAKE_TABLE = 'CREATE TABLE IF NOT EXISTS "%s" (key TEXT PRIMARY KEY, value BLOB)' % self.tablename
            self.conn.execute(MAKE_TABLE)
            self.conn.commit()
        if flag == 'w':
            self.clear()

    def _new_conn(self):
        return SqliteMultithread(
            self.filename,
            autocommit=self.autocommit,
            journal_mode=self.journal_mode,
            outer_stack=self._outer_stack,
        )

    def __enter__(self):
        if not hasattr(self, 'conn') or self.conn is None:
            self.conn = self._new_conn()
        return self

    def __exit__(self, *exc_info):
        self.close()

    def __str__(self):
        return "SqliteDict(%s)" % (self.filename)

    def __repr__(self):
        return str(self)  # no need of something complex

    def __len__(self):
        # `select count (*)` is super slow in sqlite (does a linear scan!!)
        # As a result, len() is very slow too once the table size grows beyond trivial.
        # We could keep the total count of rows ourselves, by means of triggers,
        # but that seems too complicated and would slow down normal operation
        # (insert/delete etc).
        GET_LEN = 'SELECT COUNT(*) FROM "%s"' % self.tablename
        rows = self.conn.select_one(GET_LEN)[0]
        return rows if rows is not None else 0

    def __bool__(self):
        # No elements is False, otherwise True
        GET_MAX = 'SELECT MAX(ROWID) FROM "%s"' % self.tablename
        m = self.conn.select_one(GET_MAX)[0]
        # Explicit better than implicit and bla bla
        return True if m is not None else False

    def iterkeys(self):
        GET_KEYS = 'SELECT key FROM "%s" ORDER BY rowid' % self.tablename
        for key in self.conn.select(GET_KEYS):
            yield self.decode_key(key[0])

    def itervalues(self):
        GET_VALUES = 'SELECT value FROM "%s" ORDER BY rowid' % self.tablename
        for value in self.conn.select(GET_VALUES):
            yield self.decode(value[0])

    def iteritems(self):
        GET_ITEMS = 'SELECT key, value FROM "%s" ORDER BY rowid' % self.tablename
        for key, value in self.conn.select(GET_ITEMS):
            yield self.decode_key(key), self.decode(value)

    def keys(self):
        return self.iterkeys()

    def values(self):
        return self.itervalues()

    def items(self):
        return self.iteritems()

    def __contains__(self, key):
        HAS_ITEM = 'SELECT 1 FROM "%s" WHERE key = ?' % self.tablename
        return self.conn.select_one(HAS_ITEM, (self.encode_key(key),)) is not None

    def __getitem__(self, key):
        GET_ITEM = 'SELECT value FROM "%s" WHERE key = ?' % self.tablename
        item = self.conn.select_one(GET_ITEM, (self.encode_key(key),))
        if item is None:
            raise KeyError(key)
        return self.decode(item[0])

    def __setitem__(self, key, value):
        if self.flag == 'r':
            raise RuntimeError('Refusing to write to read-only SqliteDict')

        ADD_ITEM = 'REPLACE INTO "%s" (key, value) VALUES (?,?)' % self.tablename
        self.conn.execute(ADD_ITEM, (self.encode_key(key), self.encode(value)))
        if self.autocommit:
            self.commit()

    def __delitem__(self, key):
        if self.flag == 'r':
            raise RuntimeError('Refusing to delete from read-only SqliteDict')

        if key not in self:
            raise KeyError(key)
        DEL_ITEM = 'DELETE FROM "%s" WHERE key = ?' % self.tablename
        self.conn.execute(DEL_ITEM, (self.encode_key(key),))
        if self.autocommit:
            self.commit()

    def update(self, items=(), **kwds):
        if self.flag == 'r':
            raise RuntimeError('Refusing to update read-only SqliteDict')

        try:
            items = items.items()
        except AttributeError:
            pass
        items = [(self.encode_key(k), self.encode(v)) for k, v in items]

        UPDATE_ITEMS = 'REPLACE INTO "%s" (key, value) VALUES (?, ?)' % self.tablename
        self.conn.executemany(UPDATE_ITEMS, items)
        if kwds:
            self.update(kwds)
        if self.autocommit:
            self.commit()

    def __iter__(self):
        return self.iterkeys()

    def clear(self):
        if self.flag == 'r':
            raise RuntimeError('Refusing to clear read-only SqliteDict')

        # avoid VACUUM, as it gives "OperationalError: database schema has changed"
        CLEAR_ALL = 'DELETE FROM "%s";' % self.tablename
        self.conn.commit()
        self.conn.execute(CLEAR_ALL)
        self.conn.commit()

    @staticmethod
    def get_tablenames(filename):
        """get the names of the tables in an sqlite db as a list"""
        if not os.path.isfile(filename):
            raise IOError('file %s does not exist' % (filename))
        GET_TABLENAMES = 'SELECT name FROM sqlite_master WHERE type="table"'
        with sqlite3.connect(filename) as conn:
            cursor = conn.execute(GET_TABLENAMES)
            res = cursor.fetchall()

        return [name[0] for name in res]

    def commit(self, blocking=True):
        """
        Persist all data to disk.

        When `blocking` is False, the commit command is queued, but the data is
        not guaranteed persisted (default implication when autocommit=True).
        """
        if self.conn is not None:
            self.conn.commit(blocking)
    sync = commit

    def close(self, do_log=True, force=False):
        if do_log:
            logger.debug("closing %s" % self)
        if hasattr(self, 'conn') and self.conn is not None:
            if self.conn.autocommit and not force:
                # typically calls to commit are non-blocking when autocommit is
                # used. However, we need to block on close() to ensure any
                # awaiting exceptions are handled and that all data is
                # persisted to disk before returning.
                self.conn.commit(blocking=True)
            self.conn.close(force=force)
            self.conn = None
        if self.in_temp:
            try:
                os.remove(self.filename)
            except Exception:
                pass

    def terminate(self):
        """Delete the underlying database file. Use with care."""
        if self.flag == 'r':
            raise RuntimeError('Refusing to terminate read-only SqliteDict')

        self.close()

        if self.filename == ':memory:':
            return

        logger.info("deleting %s" % self.filename)
        try:
            if os.path.isfile(self.filename):
                os.remove(self.filename)
        except (OSError, IOError):
            logger.exception("failed to delete %s" % (self.filename))

    def __del__(self):
        # like close(), but assume globals are gone by now (do not log!)
        try:
            self.close(do_log=False, force=True)
        except Exception:
            # prevent error log flood in case of multiple SqliteDicts
            # closed after connection lost (exceptions are always ignored
            # in the __del__ method).
            pass


class SqliteMultithread(threading.Thread):
    """
    Wrap sqlite connection in a way that allows concurrent requests from multiple threads.

    This is done by internally queueing the requests and processing them sequentially
    in a separate thread (in the same order they arrived).

    """
    def __init__(self, filename, autocommit, journal_mode, outer_stack=True):
        super(SqliteMultithread, self).__init__()
        self.filename = filename
        self.autocommit = autocommit
        self.journal_mode = journal_mode
        # use request queue of unlimited size
        self.reqs = Queue()
        self.daemon = True
        self._outer_stack = outer_stack
        self.log = logging.getLogger('sqlitedict.SqliteMultithread')

        #
        # Parts of this object's state get accessed from different threads, so
        # we use synchronization to avoid race conditions. For example,
        # .exception gets set inside the new daemon thread that we spawned, but
        # gets read from the main thread. This is particularly important
        # during initialization: the Thread needs some time to actually start
        # working, and until this happens, any calls to e.g.
        # check_raise_error() will prematurely return None, meaning all is
        # well. If that connection happens to fail, we'll never know about
        # it, and instead wait for a result that never arrives (effectively,
        # deadlocking). Locking solves this problem by eliminating the race
        # condition.
        #
        self._lock = threading.Lock()
        self._lock.acquire()
        self.exception = None

        self.start()

    def _connect(self):
        """Connect to the underlying database.

        Raises an exception on failure. Returns the connection and cursor on success.
        """
        try:
            if self.autocommit:
                conn = sqlite3.connect(self.filename, isolation_level=None, check_same_thread=False)
            else:
                conn = sqlite3.connect(self.filename, check_same_thread=False)
        except Exception:
            self.log.exception("Failed to initialize connection for filename: %s" % self.filename)
            self.exception = sys.exc_info()
            raise

        try:
            conn.execute('PRAGMA journal_mode = %s' % self.journal_mode)
            conn.text_factory = str
            cursor = conn.cursor()
            conn.commit()
            cursor.execute('PRAGMA synchronous=OFF')
        except Exception:
            self.log.exception("Failed to execute PRAGMA statements.")
            self.exception = sys.exc_info()
            raise

        return conn, cursor

    def run(self):
        #
        # Nb. this is what actually runs inside the new daemon thread.
        # self._lock is locked at this stage - see the initializer function.
        #
        try:
            conn, cursor = self._connect()
        finally:
            self._lock.release()

        res_ref = None
        while True:
            #
            # req: an SQL command or one of the --magic-- commands we use internally
            # arg: arguments for the command
            # res_ref: a weak reference to the queue into which responses must be placed
            # outer_stack: the outer stack, for producing more informative traces in case of error
            #
            req, arg, res_ref, outer_stack = self.reqs.get()

            if req == _REQUEST_CLOSE:
                assert res_ref, ('--close-- without return queue', res_ref)
                break
            elif req == _REQUEST_COMMIT:
                conn.commit()
                _put(res_ref, _RESPONSE_NO_MORE)
            else:
                try:
                    cursor.execute(req, arg)
                except Exception:
                    with self._lock:
                        self.exception = (e_type, e_value, e_tb) = sys.exc_info()

                    inner_stack = traceback.extract_stack()

                    # An exception occurred in our thread, but we may not be
                    # immediately able to throw it in our calling thread, if it has
                    # no return `res` queue: log as level ERROR both the inner and
                    # outer exception immediately.
                    #
                    # Any iteration of res.get() or any next call will detect the
                    # inner exception and re-raise it in the calling Thread; though
                    # it may be confusing to see an exception for an unrelated
                    # statement, an ERROR log statement from the 'sqlitedict.*'
                    # namespace contains the original outer stack location.
                    self.log.error('Inner exception:')
                    for item in traceback.format_list(inner_stack):
                        self.log.error(item)
                    self.log.error('')  # delineate traceback & exception w/blank line
                    for item in traceback.format_exception_only(e_type, e_value):
                        self.log.error(item)

                    self.log.error('')  # exception & outer stack w/blank line

                    if self._outer_stack:
                        self.log.error('Outer stack:')
                        for item in traceback.format_list(outer_stack):
                            self.log.error(item)
                        self.log.error('Exception will be re-raised at next call.')
                    else:
                        self.log.error(
                            'Unable to show the outer stack. Pass '
                            'outer_stack=True when initializing the '
                            'SqliteDict instance to show the outer stack.'
                        )

                if res_ref:
                    for rec in cursor:
                        if _put(res_ref, rec) == _PUT_REFERENT_DESTROYED:
                            #
                            # The queue we are sending responses to got garbage
                            # collected. Nobody is listening anymore, so we
                            # stop sending responses.
                            #
                            break

                    _put(res_ref, _RESPONSE_NO_MORE)

                if self.autocommit:
                    conn.commit()

        self.log.debug('received: %s, send: --no more--', req)
        conn.close()

        _put(res_ref, _RESPONSE_NO_MORE)

    def check_raise_error(self):
        """
        Check for and raise exception for any previous sqlite query.

        For the `execute*` family of method calls, such calls are non-blocking and any
        exception raised in the thread cannot be handled by the calling Thread (usually
        MainThread). This method is called on `close`, and prior to any subsequent
        calls to the `execute*` methods, to check for and raise an exception from a
        previous call in the MainThread.
        """
        with self._lock:
            if self.exception:
                e_type, e_value, e_tb = self.exception

                # clear self.exception, if the caller decides to handle such
                # exception, we should not repeatedly re-raise it.
                self.exception = None

                self.log.error('An exception occurred from a previous statement, view '
                               'the logging namespace "sqlitedict" for outer stack.')

                # The third argument to raise is the traceback object, and it is
                # substituted instead of the current location as the place where
                # the exception occurred; this is so that when using debuggers such
                # as `pdb', or simply evaluating the naturally raised traceback, we
                # retain the original (inner) location of where the exception
                # occurred.
                reraise(e_type, e_value, e_tb)

    def execute(self, req, arg=None, res=None):
        """
        `execute` calls are non-blocking: just queue up the request and return immediately.

        :param req: The request (an SQL command)
        :param arg: Arguments to the SQL command
        :param res: A queue in which to place responses as they become available
        """
        self.check_raise_error()
        stack = None

        if self._outer_stack:
            # NOTE: This might be a lot of information to pump into an input
            # queue, affecting performance. I've also seen earlier versions of
            # jython take a severe performance impact for throwing exceptions
            # so often.
            stack = traceback.extract_stack()[:-1]

        #
        # We pass a weak reference to the response queue instead of a regular
        # reference, because we want the queues to be garbage-collected
        # more aggressively.
        #
        res_ref = None
        if res:
            res_ref = weakref.ref(res)

        self.reqs.put((req, arg or tuple(), res_ref, stack))

    def executemany(self, req, items):
        for item in items:
            self.execute(req, item)
        self.check_raise_error()

    def select(self, req, arg=None):
        """
        Unlike sqlite's native select, this select doesn't handle iteration efficiently.

        The result of `select` starts filling up with values as soon as the
        request is dequeued, and although you can iterate over the result normally
        (`for res in self.select(): ...`), the entire result will be in memory.
        """
        res = Queue()  # results of the select will appear as items in this queue
        self.execute(req, arg, res)
        while True:
            rec = res.get()
            self.check_raise_error()
            if rec == _RESPONSE_NO_MORE:
                break
            yield rec

    def select_one(self, req, arg=None):
        """Return only the first row of the SELECT, or None if there are no matching rows."""
        try:
            return next(iter(self.select(req, arg)))
        except StopIteration:
            return None

    def commit(self, blocking=True):
        if blocking:
            # by default, we await completion of commit() unless
            # blocking=False. This ensures any available exceptions for any
            # previous statement are thrown before returning, and that the
            # data has actually persisted to disk!
            self.select_one(_REQUEST_COMMIT)
        else:
            # otherwise, we fire and forget as usual.
            self.execute(_REQUEST_COMMIT)

    def close(self, force=False):
        if force:
            # If a SqliteDict is being killed or garbage-collected, then select_one()
            # could hang forever because run() might already have exited and therefore
            # can't process the request. Instead, push the close command to the requests
            # queue directly. If run() is still alive, it will exit gracefully. If not,
            # then there's nothing we can do anyway.
            self.reqs.put((_REQUEST_CLOSE, None, weakref.ref(Queue()), None))
        else:
            # we abuse 'select' to "iter" over a "--close--" statement so that we
            # can confirm the completion of close before joining the thread and
            # returning (by semaphore '--no more--')
            self.select_one(_REQUEST_CLOSE)
            self.join()


#
# This is here for .github/workflows/release.yml
#
if __name__ == '__main__':
    print(__version__)
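The SqliteDict class above stores every entry as a `(key TEXT PRIMARY KEY, value BLOB)` row, pickling values on the way in (`encode`) and unpickling on the way out (`decode`). That storage scheme can be sketched with nothing but the standard-library `sqlite3` module; this is a minimal, hypothetical illustration of the SQL the class issues, not part of sqlitedict itself (the table name `unnamed` matches the class default):

```python
import sqlite3
from pickle import dumps, loads, HIGHEST_PROTOCOL

# In-memory database standing in for the `some.db` file in the docstring example.
conn = sqlite3.connect(":memory:")

# Same schema as SqliteDict.__init__ creates.
conn.execute('CREATE TABLE IF NOT EXISTS "unnamed" (key TEXT PRIMARY KEY, value BLOB)')

# __setitem__: pickle the value, wrap it as a BLOB, and upsert via REPLACE INTO.
conn.execute(
    'REPLACE INTO "unnamed" (key, value) VALUES (?, ?)',
    ("some_key", sqlite3.Binary(dumps({"a": 1}, protocol=HIGHEST_PROTOCOL))),
)

# __getitem__: fetch the BLOB back and unpickle it.
row = conn.execute('SELECT value FROM "unnamed" WHERE key = ?', ("some_key",)).fetchone()
value = loads(bytes(row[0]))
print(value)  # {'a': 1}
```

What the library adds on top of this sketch is the `SqliteMultithread` layer: all statements are funneled through a queue to a single worker thread, so the same dict can be used safely from multiple threads.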
llmeval-env/lib/python3.10/site-packages/threadpoolctl.py
ADDED
@@ -0,0 +1,1280 @@
"""threadpoolctl

This module provides utilities to introspect native libraries that rely on
thread pools (notably BLAS and OpenMP implementations) and dynamically set the
maximal number of threads they can use.
"""

# License: BSD 3-Clause

# The code to introspect dynamically loaded libraries on POSIX systems is
# adapted from code by Intel developer @anton-malakhov available at
# https://github.com/IntelPython/smp (Copyright (c) 2017, Intel Corporation)
# and also published under the BSD 3-Clause license
import os
import re
import sys
import ctypes
import itertools
import textwrap
from typing import final
import warnings
from ctypes.util import find_library
from abc import ABC, abstractmethod
from functools import lru_cache
from contextlib import ContextDecorator

__version__ = "3.5.0"
__all__ = [
    "threadpool_limits",
    "threadpool_info",
    "ThreadpoolController",
    "LibController",
    "register",
]


# One can get runtime errors or even segfaults due to multiple OpenMP libraries
# loaded simultaneously, which can happen easily in Python when importing and
# using compiled extensions built with different compilers and therefore
# different OpenMP runtimes in the same program. In particular libiomp (used by
# Intel ICC) and libomp used by clang/llvm tend to crash. This can happen for
# instance when calling BLAS inside a prange. Setting the following environment
# variable allows multiple OpenMP libraries to be loaded. It should not degrade
# performance since we manually take care of potential over-subscription
# performance issues, in sections of the code where nested OpenMP loops can
# happen, by dynamically reconfiguring the inner OpenMP runtime to temporarily
# disable it while under the scope of the outer OpenMP parallel section.
os.environ.setdefault("KMP_DUPLICATE_LIB_OK", "True")

# Structure to cast the info on dynamically loaded library. See
# https://linux.die.net/man/3/dl_iterate_phdr for more details.
_SYSTEM_UINT = ctypes.c_uint64 if sys.maxsize > 2**32 else ctypes.c_uint32
_SYSTEM_UINT_HALF = ctypes.c_uint32 if sys.maxsize > 2**32 else ctypes.c_uint16


class _dl_phdr_info(ctypes.Structure):
    _fields_ = [
        ("dlpi_addr", _SYSTEM_UINT),  # Base address of object
        ("dlpi_name", ctypes.c_char_p),  # path to the library
        ("dlpi_phdr", ctypes.c_void_p),  # pointer on dlpi_headers
        ("dlpi_phnum", _SYSTEM_UINT_HALF),  # number of elements in dlpi_phdr
    ]


# The RTLD_NOLOAD flag for loading shared libraries is not defined on Windows.
try:
    _RTLD_NOLOAD = os.RTLD_NOLOAD
except AttributeError:
    _RTLD_NOLOAD = ctypes.DEFAULT_MODE


class LibController(ABC):
    """Abstract base class for the individual library controllers

    A library controller must expose the following class attributes:
        - user_api : str
            Usually the name of the library or generic specification the library
            implements, e.g. "blas" is a specification with different implementations.
        - internal_api : str
            Usually the name of the library or concrete implementation of some
            specification, e.g. "openblas" is an implementation of the "blas"
            specification.
        - filename_prefixes : tuple
            Possible prefixes of the shared library's filename that allow to
            identify the library. e.g. "libopenblas" for libopenblas.so.

    and implement the following methods: `get_num_threads`, `set_num_threads` and
    `get_version`.

    Threadpoolctl loops through all the loaded shared libraries and tries to match
    the filename of each library with the `filename_prefixes`. If a match is found, a
    controller is instantiated and a handler to the library is stored in the `dynlib`
    attribute as a `ctypes.CDLL` object. It can be used to access the necessary symbols
    of the shared library to implement the above methods.

    The following information will be exposed in the info dictionary:
      - user_api : standardized API, if any, or a copy of internal_api.
      - internal_api : implementation-specific API.
      - num_threads : the current thread limit.
      - prefix : prefix of the shared library's filename.
      - filepath : path to the loaded shared library.
      - version : version of the library (if available).

    In addition, each library controller may expose internal API specific entries. They
    must be set as attributes in the `set_additional_attributes` method.
    """

    @final
    def __init__(self, *, filepath=None, prefix=None, parent=None):
        """This is not meant to be overridden by subclasses."""
        self.parent = parent
        self.prefix = prefix
        self.filepath = filepath
        self.dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD)
        self._symbol_prefix, self._symbol_suffix = self._find_affixes()
        self.version = self.get_version()
        self.set_additional_attributes()

    def info(self):
        """Return relevant info wrapped in a dict"""
        hidden_attrs = ("dynlib", "parent", "_symbol_prefix", "_symbol_suffix")
        return {
            "user_api": self.user_api,
            "internal_api": self.internal_api,
            "num_threads": self.num_threads,
            **{k: v for k, v in vars(self).items() if k not in hidden_attrs},
        }

    def set_additional_attributes(self):
        """Set additional attributes meant to be exposed in the info dict"""

    @property
    def num_threads(self):
        """Exposes the current thread limit as a dynamic property

        This is not meant to be used or overridden by subclasses.
        """
        return self.get_num_threads()

    @abstractmethod
    def get_num_threads(self):
        """Return the maximum number of threads available to use"""

    @abstractmethod
    def set_num_threads(self, num_threads):
        """Set the maximum number of threads to use"""

    @abstractmethod
    def get_version(self):
        """Return the version of the shared library"""

    def _find_affixes(self):
        """Return the affixes for the symbols of the shared library"""
        return "", ""

    def _get_symbol(self, name):
        """Return the symbol of the shared library accounting for the affixes"""
        return getattr(
            self.dynlib, f"{self._symbol_prefix}{name}{self._symbol_suffix}", None
        )

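The `LibController` contract above boils down to two class attributes, three abstract methods, and an `info()` dict assembled from instance attributes. A minimal sketch of that contract, with a hypothetical in-memory "library" standing in for the `ctypes.CDLL` handle (names `MiniLibController` and `FakeBlasController` are illustrative, not part of threadpoolctl):

```python
from abc import ABC, abstractmethod


class MiniLibController(ABC):
    """Toy version of the LibController contract (no ctypes involved)."""

    user_api = None
    internal_api = None

    @abstractmethod
    def get_num_threads(self): ...

    @abstractmethod
    def set_num_threads(self, num_threads): ...

    def info(self):
        # Mirrors the shape of LibController.info(): fixed keys plus the
        # dynamically queried thread count.
        return {
            "user_api": self.user_api,
            "internal_api": self.internal_api,
            "num_threads": self.get_num_threads(),
        }


class FakeBlasController(MiniLibController):
    user_api = "blas"
    internal_api = "fakeblas"

    def __init__(self):
        self._n = 4  # pretend the native library currently uses 4 threads

    def get_num_threads(self):
        return self._n

    def set_num_threads(self, num_threads):
        self._n = num_threads


ctrl = FakeBlasController()
ctrl.set_num_threads(2)
print(ctrl.info())
# {'user_api': 'blas', 'internal_api': 'fakeblas', 'num_threads': 2}
```

The real class differs in that `__init__` is `@final` and opens the already-loaded shared library with `RTLD_NOLOAD`, so subclasses only customize symbol lookup, never construction.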
class OpenBLASController(LibController):
    """Controller class for OpenBLAS"""

    user_api = "blas"
    internal_api = "openblas"
    filename_prefixes = ("libopenblas", "libblas", "libscipy_openblas")

    _symbol_prefixes = ("", "scipy_")
    _symbol_suffixes = ("", "64_", "_64")

    # All variations of "openblas_get_num_threads", accounting for the affixes
    check_symbols = tuple(
        f"{prefix}openblas_get_num_threads{suffix}"
        for prefix, suffix in itertools.product(_symbol_prefixes, _symbol_suffixes)
    )

    def _find_affixes(self):
        for prefix, suffix in itertools.product(
            self._symbol_prefixes, self._symbol_suffixes
        ):
            if hasattr(self.dynlib, f"{prefix}openblas_get_num_threads{suffix}"):
                return prefix, suffix

    def set_additional_attributes(self):
        self.threading_layer = self._get_threading_layer()
        self.architecture = self._get_architecture()

    def get_num_threads(self):
        get_num_threads_func = self._get_symbol("openblas_get_num_threads")
        if get_num_threads_func is not None:
            return get_num_threads_func()
        return None

    def set_num_threads(self, num_threads):
        set_num_threads_func = self._get_symbol("openblas_set_num_threads")
        if set_num_threads_func is not None:
            return set_num_threads_func(num_threads)
        return None

    def get_version(self):
        # None means OpenBLAS is not loaded or version < 0.3.4, since OpenBLAS
        # did not expose its version before that.
        get_version_func = self._get_symbol("openblas_get_config")
        if get_version_func is not None:
            get_version_func.restype = ctypes.c_char_p
            config = get_version_func().split()
            if config[0] == b"OpenBLAS":
                return config[1].decode("utf-8")
            return None
        return None

    def _get_threading_layer(self):
        """Return the threading layer of OpenBLAS"""
        get_threading_layer_func = self._get_symbol("openblas_get_parallel")
        if get_threading_layer_func is not None:
            threading_layer = get_threading_layer_func()
            if threading_layer == 2:
                return "openmp"
            elif threading_layer == 1:
                return "pthreads"
            return "disabled"
        return "unknown"

    def _get_architecture(self):
        """Return the architecture detected by OpenBLAS"""
        get_architecture_func = self._get_symbol("openblas_get_corename")
        if get_architecture_func is not None:
            get_architecture_func.restype = ctypes.c_char_p
            return get_architecture_func().decode("utf-8")
        return None

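The affix search above handles OpenBLAS builds that mangle symbol names (SciPy-vendored builds, ILP64 builds) by probing every `prefix`/`suffix` combination with `itertools.product`. The candidate symbol names it generates can be reproduced standalone:

```python
import itertools

_symbol_prefixes = ("", "scipy_")
_symbol_suffixes = ("", "64_", "_64")

# Same enumeration used to build OpenBLAS' `check_symbols` and to probe the
# CDLL handle in `_find_affixes`.
candidates = tuple(
    f"{prefix}openblas_get_num_threads{suffix}"
    for prefix, suffix in itertools.product(_symbol_prefixes, _symbol_suffixes)
)
print(len(candidates))   # 6 combinations: 2 prefixes x 3 suffixes
print(candidates[0])     # openblas_get_num_threads
print(candidates[-1])    # scipy_openblas_get_num_threads_64
```

The first match found fixes `_symbol_prefix`/`_symbol_suffix` for the instance, after which `_get_symbol` transparently rewrites every plain symbol name.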
class BLISController(LibController):
    """Controller class for BLIS"""

    user_api = "blas"
    internal_api = "blis"
    filename_prefixes = ("libblis", "libblas")
    check_symbols = (
        "bli_thread_get_num_threads",
        "bli_thread_set_num_threads",
        "bli_info_get_version_str",
        "bli_info_get_enable_openmp",
        "bli_info_get_enable_pthreads",
        "bli_arch_query_id",
        "bli_arch_string",
    )

    def set_additional_attributes(self):
        self.threading_layer = self._get_threading_layer()
        self.architecture = self._get_architecture()

    def get_num_threads(self):
        get_func = getattr(self.dynlib, "bli_thread_get_num_threads", lambda: None)
        num_threads = get_func()
        # by default BLIS is single-threaded and get_num_threads
        # returns -1. We map it to 1 for consistency with other libraries.
        return 1 if num_threads == -1 else num_threads

    def set_num_threads(self, num_threads):
        set_func = getattr(
            self.dynlib, "bli_thread_set_num_threads", lambda num_threads: None
        )
        return set_func(num_threads)

    def get_version(self):
        get_version_ = getattr(self.dynlib, "bli_info_get_version_str", None)
        if get_version_ is None:
            return None

        get_version_.restype = ctypes.c_char_p
        return get_version_().decode("utf-8")

    def _get_threading_layer(self):
        """Return the threading layer of BLIS"""
        if getattr(self.dynlib, "bli_info_get_enable_openmp", lambda: False)():
            return "openmp"
        elif getattr(self.dynlib, "bli_info_get_enable_pthreads", lambda: False)():
            return "pthreads"
        return "disabled"

    def _get_architecture(self):
        """Return the architecture detected by BLIS"""
        bli_arch_query_id = getattr(self.dynlib, "bli_arch_query_id", None)
        bli_arch_string = getattr(self.dynlib, "bli_arch_string", None)
        if bli_arch_query_id is None or bli_arch_string is None:
            return None

        # the true restype should be BLIS' arch_t (enum) but int should work
        # for us:
        bli_arch_query_id.restype = ctypes.c_int
        bli_arch_string.restype = ctypes.c_char_p
        return bli_arch_string(bli_arch_query_id()).decode("utf-8")


class FlexiBLASController(LibController):
    """Controller class for FlexiBLAS"""

    user_api = "blas"
    internal_api = "flexiblas"
    filename_prefixes = ("libflexiblas",)
    check_symbols = (
        "flexiblas_get_num_threads",
        "flexiblas_set_num_threads",
        "flexiblas_get_version",
        "flexiblas_list",
        "flexiblas_list_loaded",
        "flexiblas_current_backend",
    )

    @property
    def loaded_backends(self):
        return self._get_backend_list(loaded=True)

    @property
    def current_backend(self):
        return self._get_current_backend()

    def info(self):
        """Return relevant info wrapped in a dict"""
        # We override the info method because the loaded and current backends
        # are dynamic properties
        exposed_attrs = super().info()
        exposed_attrs["loaded_backends"] = self.loaded_backends
        exposed_attrs["current_backend"] = self.current_backend

        return exposed_attrs

    def set_additional_attributes(self):
        self.available_backends = self._get_backend_list(loaded=False)

    def get_num_threads(self):
        get_func = getattr(self.dynlib, "flexiblas_get_num_threads", lambda: None)
        num_threads = get_func()
        # by default FlexiBLAS is single-threaded and get_num_threads
        # returns -1. We map it to 1 for consistency with other libraries.
        return 1 if num_threads == -1 else num_threads

    def set_num_threads(self, num_threads):
        set_func = getattr(
            self.dynlib, "flexiblas_set_num_threads", lambda num_threads: None
        )
        return set_func(num_threads)

    def get_version(self):
        get_version_ = getattr(self.dynlib, "flexiblas_get_version", None)
        if get_version_ is None:
            return None

        major = ctypes.c_int()
        minor = ctypes.c_int()
        patch = ctypes.c_int()
        get_version_(ctypes.byref(major), ctypes.byref(minor), ctypes.byref(patch))
        return f"{major.value}.{minor.value}.{patch.value}"

    def _get_backend_list(self, loaded=False):
        """Return the list of available backends for FlexiBLAS.

        If loaded is False, return the list of available backends from the FlexiBLAS
        configuration. If loaded is True, return the list of actually loaded backends.
        """
        func_name = f"flexiblas_list{'_loaded' if loaded else ''}"
        get_backend_list_ = getattr(self.dynlib, func_name, None)
        if get_backend_list_ is None:
            return None

        n_backends = get_backend_list_(None, 0, 0)

        backends = []
        for i in range(n_backends):
            backend_name = ctypes.create_string_buffer(1024)
            get_backend_list_(backend_name, 1024, i)
            if backend_name.value.decode("utf-8") != "__FALLBACK__":
                # We don't know when to expect __FALLBACK__ but it is not a real
                # backend and does not show up when running flexiblas list.
                backends.append(backend_name.value.decode("utf-8"))
        return backends

    def _get_current_backend(self):
        """Return the backend of FlexiBLAS"""
        get_backend_ = getattr(self.dynlib, "flexiblas_current_backend", None)
        if get_backend_ is None:
            return None

        backend = ctypes.create_string_buffer(1024)
        get_backend_(backend, ctypes.sizeof(backend))
        return backend.value.decode("utf-8")

    def switch_backend(self, backend):
        """Switch the backend of FlexiBLAS

        Parameters
        ----------
        backend : str
            The name or the path to the shared library of the backend to switch to. If
            the backend is not already loaded, it will be loaded first.
        """
        if backend not in self.loaded_backends:
            if backend in self.available_backends:
                load_func = getattr(self.dynlib, "flexiblas_load_backend", lambda _: -1)
            else:  # assume backend is a path to a shared library
                load_func = getattr(
                    self.dynlib, "flexiblas_load_backend_library", lambda _: -1
                )
            res = load_func(str(backend).encode("utf-8"))
            if res == -1:
                raise RuntimeError(
                    f"Failed to load backend {backend!r}. It must either be the name of"
                    " a backend available in the FlexiBLAS configuration "
                    f"{self.available_backends} or the path to a valid shared library."
                )

            # Trigger a new search of loaded shared libraries since loading a new
            # backend caused a dlopen.
            self.parent._load_libraries()

        switch_func = getattr(self.dynlib, "flexiblas_switch", lambda _: -1)
        idx = self.loaded_backends.index(backend)
        res = switch_func(idx)
        if res == -1:
            raise RuntimeError(f"Failed to switch to backend {backend!r}.")

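`_get_backend_list` and `_get_current_backend` above use the classic ctypes out-parameter pattern: allocate a mutable `create_string_buffer`, let the C function fill it, then read the NUL-terminated `.value`. The buffer side of that pattern can be shown in isolation, with a plain Python function (`fake_current_backend`, a stand-in invented here) playing the role of the C call:

```python
import ctypes


def fake_current_backend(buf, size):
    """Stand-in for a C function that copies a backend name into the buffer."""
    name = b"OPENBLAS"
    buf.value = name[: size - 1]  # ctypes appends the NUL terminator
    return len(name)


backend = ctypes.create_string_buffer(1024)
fake_current_backend(backend, ctypes.sizeof(backend))
print(backend.value.decode("utf-8"))  # OPENBLAS
```

Passing `ctypes.sizeof(backend)` rather than a hard-coded size keeps the call and the allocation in sync, exactly as `_get_current_backend` does.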
class MKLController(LibController):
    """Controller class for MKL"""

    user_api = "blas"
    internal_api = "mkl"
    filename_prefixes = ("libmkl_rt", "mkl_rt", "libblas")
    check_symbols = (
        "MKL_Get_Max_Threads",
        "MKL_Set_Num_Threads",
        "MKL_Get_Version_String",
        "MKL_Set_Threading_Layer",
    )

    def set_additional_attributes(self):
        self.threading_layer = self._get_threading_layer()

    def get_num_threads(self):
        get_func = getattr(self.dynlib, "MKL_Get_Max_Threads", lambda: None)
        return get_func()

    def set_num_threads(self, num_threads):
        set_func = getattr(self.dynlib, "MKL_Set_Num_Threads", lambda num_threads: None)
        return set_func(num_threads)

    def get_version(self):
        if not hasattr(self.dynlib, "MKL_Get_Version_String"):
            return None

        res = ctypes.create_string_buffer(200)
        self.dynlib.MKL_Get_Version_String(res, 200)

        version = res.value.decode("utf-8")
        group = re.search(r"Version ([^ ]+) ", version)
        if group is not None:
            version = group.groups()[0]
        return version.strip()

    def _get_threading_layer(self):
        """Return the threading layer of MKL"""
        # The function mkl_set_threading_layer returns the current threading
        # layer. Calling it with an invalid threading layer allows us to safely
        # get the threading layer
        set_threading_layer = getattr(
            self.dynlib, "MKL_Set_Threading_Layer", lambda layer: -1
        )
        layer_map = {
            0: "intel",
            1: "sequential",
            2: "pgi",
            3: "gnu",
            4: "tbb",
            -1: "not specified",
        }
        return layer_map[set_threading_layer(-1)]


class OpenMPController(LibController):
    """Controller class for OpenMP"""

    user_api = "openmp"
    internal_api = "openmp"
    filename_prefixes = ("libiomp", "libgomp", "libomp", "vcomp")
    check_symbols = (
        "omp_get_max_threads",
        "omp_get_num_threads",
    )

    def get_num_threads(self):
        get_func = getattr(self.dynlib, "omp_get_max_threads", lambda: None)
        return get_func()

    def set_num_threads(self, num_threads):
        set_func = getattr(self.dynlib, "omp_set_num_threads", lambda num_threads: None)
        return set_func(num_threads)

    def get_version(self):
        # There is no way to get the version number programmatically in OpenMP.
        return None

# Controllers for the libraries that we'll look for in the loaded libraries.
# Third party libraries can register their own controllers.
_ALL_CONTROLLERS = [
    OpenBLASController,
    BLISController,
    MKLController,
    OpenMPController,
    FlexiBLASController,
]

# Helpers for the doc and test names
_ALL_USER_APIS = list(set(lib.user_api for lib in _ALL_CONTROLLERS))
_ALL_INTERNAL_APIS = [lib.internal_api for lib in _ALL_CONTROLLERS]
_ALL_PREFIXES = list(
    set(prefix for lib in _ALL_CONTROLLERS for prefix in lib.filename_prefixes)
)
_ALL_BLAS_LIBRARIES = [
    lib.internal_api for lib in _ALL_CONTROLLERS if lib.user_api == "blas"
]
_ALL_OPENMP_LIBRARIES = OpenMPController.filename_prefixes


def register(controller):
    """Register a new controller"""
    _ALL_CONTROLLERS.append(controller)
    _ALL_USER_APIS.append(controller.user_api)
    _ALL_INTERNAL_APIS.append(controller.internal_api)
    _ALL_PREFIXES.extend(controller.filename_prefixes)


def _format_docstring(*args, **kwargs):
    def decorator(o):
        if o.__doc__ is not None:
            o.__doc__ = o.__doc__.format(*args, **kwargs)
        return o

    return decorator

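`_format_docstring` above fills `{PLACEHOLDER}` fields in a docstring at import time via `str.format`, so user-facing docs always list the currently registered APIs. The `None` check matters: under `python -OO`, docstrings are stripped and `o.__doc__` is `None`. A standalone run of the same decorator:

```python
def _format_docstring(*args, **kwargs):
    # Same helper as above: format the docstring in place, but leave a `None`
    # docstring alone (e.g. when running under `python -OO`).
    def decorator(o):
        if o.__doc__ is not None:
            o.__doc__ = o.__doc__.format(*args, **kwargs)
        return o

    return decorator


@_format_docstring(APIS=["blas", "openmp"])
def probe():
    """Supported user APIs: {APIS}."""


print(probe.__doc__)  # Supported user APIs: ['blas', 'openmp'].
```

One consequence: literal braces in a decorated docstring must be doubled (`{{key: max_threads}}`), which is exactly what the `threadpool_limits` docstring below does.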
@lru_cache(maxsize=10000)
def _realpath(filepath):
    """Small caching wrapper around os.path.realpath to limit system calls"""
    return os.path.realpath(filepath)


@_format_docstring(USER_APIS=list(_ALL_USER_APIS), INTERNAL_APIS=_ALL_INTERNAL_APIS)
def threadpool_info():
    """Return the maximal number of threads for each detected library.

    Return a list with all the supported libraries that have been found. Each
    library is represented by a dict with the following information:

      - "user_api" : user API. Possible values are {USER_APIS}.
      - "internal_api": internal API. Possible values are {INTERNAL_APIS}.
      - "prefix" : filename prefix of the specific implementation.
      - "filepath": path to the loaded library.
      - "version": version of the library (if available).
      - "num_threads": the current thread limit.

    In addition, each library may contain internal_api specific entries.
    """
    return ThreadpoolController().info()

class _ThreadpoolLimiter:
    """The guts of ThreadpoolController.limit

    Refer to the docstring of ThreadpoolController.limit for more details.

    It will only act on the library controllers held by the provided `controller`.
    Using the default constructor sets the limits right away such that it can be used as
    a callable. Setting the limits can be delayed by using the `wrap` class method such
    that it can be used as a decorator.
    """

    def __init__(self, controller, *, limits=None, user_api=None):
        self._controller = controller
        self._limits, self._user_api, self._prefixes = self._check_params(
            limits, user_api
        )
        self._original_info = self._controller.info()
        self._set_threadpool_limits()

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.restore_original_limits()

    @classmethod
    def wrap(cls, controller, *, limits=None, user_api=None):
        """Return an instance of this class that can be used as a decorator"""
        return _ThreadpoolLimiterDecorator(
            controller=controller, limits=limits, user_api=user_api
        )

    def restore_original_limits(self):
        """Set the limits back to their original values"""
        for lib_controller, original_info in zip(
            self._controller.lib_controllers, self._original_info
        ):
            lib_controller.set_num_threads(original_info["num_threads"])

    # Alias of `restore_original_limits` for backward compatibility
    unregister = restore_original_limits

    def get_original_num_threads(self):
        """Original num_threads from before calling threadpool_limits

        Return a dict `{user_api: num_threads}`.
        """
        num_threads = {}
        warning_apis = []

        for user_api in self._user_api:
            limits = [
                lib_info["num_threads"]
                for lib_info in self._original_info
                if lib_info["user_api"] == user_api
            ]
            limits = set(limits)
            n_limits = len(limits)

            if n_limits == 1:
                limit = limits.pop()
            elif n_limits == 0:
                limit = None
            else:
                limit = min(limits)
                warning_apis.append(user_api)

            num_threads[user_api] = limit

        if warning_apis:
            warnings.warn(
                "Multiple value possible for following user apis: "
                + ", ".join(warning_apis)
                + ". Returning the minimum."
            )

        return num_threads

    def _check_params(self, limits, user_api):
        """Suitable values for the _limits, _user_api and _prefixes attributes"""

        if isinstance(limits, str) and limits == "sequential_blas_under_openmp":
            (
                limits,
                user_api,
            ) = self._controller._get_params_for_sequential_blas_under_openmp().values()

        if limits is None or isinstance(limits, int):
            if user_api is None:
                user_api = _ALL_USER_APIS
            elif user_api in _ALL_USER_APIS:
                user_api = [user_api]
            else:
                raise ValueError(
                    f"user_api must be either in {_ALL_USER_APIS} or None. Got "
                    f"{user_api} instead."
                )

            if limits is not None:
                limits = {api: limits for api in user_api}
            prefixes = []
        else:
            if isinstance(limits, list):
                # This should be a list of dicts of library info, for
                # compatibility with the result from threadpool_info.
                limits = {
                    lib_info["prefix"]: lib_info["num_threads"] for lib_info in limits
                }
            elif isinstance(limits, ThreadpoolController):
                # To set the limits from the library controllers of a
                # ThreadpoolController object.
                limits = {
                    lib_controller.prefix: lib_controller.num_threads
                    for lib_controller in limits.lib_controllers
                }

            if not isinstance(limits, dict):
                raise TypeError(
                    "limits must either be an int, a list, a dict, or "
                    f"'sequential_blas_under_openmp'. Got {type(limits)} instead"
                )

            # With a dictionary, can set both specific limit for given
            # libraries and global limit for user_api. Fetch each separately.
            prefixes = [prefix for prefix in limits if prefix in _ALL_PREFIXES]
            user_api = [api for api in limits if api in _ALL_USER_APIS]

        return limits, user_api, prefixes

    def _set_threadpool_limits(self):
        """Change the maximal number of threads in selected thread pools.

        Return a list with all the supported libraries that have been found
        matching `self._prefixes` and `self._user_api`.
        """
        if self._limits is None:
            return

        for lib_controller in self._controller.lib_controllers:
            # self._limits is a dict {key: num_threads} where key is either
            # a prefix or a user_api. If a library matches both, the limit
            # corresponding to the prefix is chosen.
            if lib_controller.prefix in self._limits:
                num_threads = self._limits[lib_controller.prefix]
            elif lib_controller.user_api in self._limits:
                num_threads = self._limits[lib_controller.user_api]
            else:
                continue

            if num_threads is not None:
                lib_controller.set_num_threads(num_threads)


class _ThreadpoolLimiterDecorator(_ThreadpoolLimiter, ContextDecorator):
    """Same as _ThreadpoolLimiter but to be used as a decorator"""

    def __init__(self, controller, *, limits=None, user_api=None):
        self._limits, self._user_api, self._prefixes = self._check_params(
            limits, user_api
        )
        self._controller = controller

    def __enter__(self):
        # we need to set the limits here and not in the __init__ because we want the
        # limits to be set when calling the decorated function, not when creating the
        # decorator.
        self._original_info = self._controller.info()
        self._set_threadpool_limits()
        return self

741 |
+
@_format_docstring(
|
742 |
+
USER_APIS=", ".join(f'"{api}"' for api in _ALL_USER_APIS),
|
743 |
+
BLAS_LIBS=", ".join(_ALL_BLAS_LIBRARIES),
|
744 |
+
OPENMP_LIBS=", ".join(_ALL_OPENMP_LIBRARIES),
|
745 |
+
)
|
746 |
+
class threadpool_limits(_ThreadpoolLimiter):
|
747 |
+
"""Change the maximal number of threads that can be used in thread pools.
|
748 |
+
|
749 |
+
This object can be used either as a callable (the construction of this object
|
750 |
+
limits the number of threads), as a context manager in a `with` block to
|
751 |
+
automatically restore the original state of the controlled libraries when exiting
|
752 |
+
the block, or as a decorator through its `wrap` method.
|
753 |
+
|
754 |
+
Set the maximal number of threads that can be used in thread pools used in
|
755 |
+
the supported libraries to `limit`. This function works for libraries that
|
756 |
+
are already loaded in the interpreter and can be changed dynamically.
|
757 |
+
|
758 |
+
This effect is global and impacts the whole Python process. There is no thread level
|
759 |
+
isolation as these libraries do not offer thread-local APIs to configure the number
|
760 |
+
of threads to use in nested parallel calls.
|
761 |
+
|
762 |
+
Parameters
|
763 |
+
----------
|
764 |
+
limits : int, dict, 'sequential_blas_under_openmp' or None (default=None)
|
765 |
+
The maximal number of threads that can be used in thread pools
|
766 |
+
|
767 |
+
- If int, sets the maximum number of threads to `limits` for each
|
768 |
+
library selected by `user_api`.
|
769 |
+
|
770 |
+
- If it is a dictionary `{{key: max_threads}}`, this function sets a
|
771 |
+
custom maximum number of threads for each `key` which can be either a
|
772 |
+
`user_api` or a `prefix` for a specific library.
|
773 |
+
|
774 |
+
- If 'sequential_blas_under_openmp', it will chose the appropriate `limits`
|
775 |
+
and `user_api` parameters for the specific use case of sequential BLAS
|
776 |
+
calls within an OpenMP parallel region. The `user_api` parameter is
|
777 |
+
ignored.
|
778 |
+
|
779 |
+
- If None, this function does not do anything.
|
780 |
+
|
781 |
+
user_api : {USER_APIS} or None (default=None)
|
782 |
+
APIs of libraries to limit. Used only if `limits` is an int.
|
783 |
+
|
784 |
+
- If "blas", it will only limit BLAS supported libraries ({BLAS_LIBS}).
|
785 |
+
|
786 |
+
- If "openmp", it will only limit OpenMP supported libraries
|
787 |
+
({OPENMP_LIBS}). Note that it can affect the number of threads used
|
788 |
+
by the BLAS libraries if they rely on OpenMP.
|
789 |
+
|
790 |
+
- If None, this function will apply to all supported libraries.
|
791 |
+
"""
|
792 |
+
|
793 |
+
def __init__(self, limits=None, user_api=None):
|
794 |
+
super().__init__(ThreadpoolController(), limits=limits, user_api=user_api)
|
795 |
+
|
796 |
+
@classmethod
|
797 |
+
def wrap(cls, limits=None, user_api=None):
|
798 |
+
return super().wrap(ThreadpoolController(), limits=limits, user_api=user_api)
|
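The three usage modes in the docstring above (callable, context manager, decorator) share one pattern: apply the limit at construction time, restore the original state on exit. A minimal self-contained sketch of that pattern — the `DemoLimiter` class and its `current` counter are illustrative stand-ins, not part of threadpoolctl:

```python
class DemoLimiter:
    """Applies a 'limit' on construction, restores it on exit."""

    current = 8  # stands in for the process-wide thread count

    def __init__(self, limits=None):
        self._original = DemoLimiter.current
        if limits is not None:
            DemoLimiter.current = limits  # construction applies the limit

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        DemoLimiter.current = self._original  # restore on block exit

    @classmethod
    def wrap(cls, limits=None):
        # decorator mode: limit around each call of the wrapped function
        def decorator(func):
            def inner(*args, **kwargs):
                with cls(limits=limits):
                    return func(*args, **kwargs)
            return inner
        return decorator


with DemoLimiter(limits=1):
    assert DemoLimiter.current == 1  # limited inside the block
assert DemoLimiter.current == 8      # restored afterwards


@DemoLimiter.wrap(limits=2)
def task():
    return DemoLimiter.current

assert task() == 2
```

Because the "apply" step runs in `__init__`, the plain-call form `threadpool_limits(4)` works without a `with` statement, at the cost of not restoring automatically.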


class ThreadpoolController:
    """Collection of LibController objects for all loaded supported libraries

    Attributes
    ----------
    lib_controllers : list of `LibController` objects
        The list of library controllers of all loaded supported libraries.
    """

    # Cache for libc under POSIX and a few system libraries under Windows.
    # We use a class level cache instead of an instance level cache because
    # it's very unlikely that a shared library will be unloaded and reloaded
    # during the lifetime of a program.
    _system_libraries = dict()

    def __init__(self):
        self.lib_controllers = []
        self._load_libraries()
        self._warn_if_incompatible_openmp()

    @classmethod
    def _from_controllers(cls, lib_controllers):
        new_controller = cls.__new__(cls)
        new_controller.lib_controllers = lib_controllers
        return new_controller

    def info(self):
        """Return lib_controllers info as a list of dicts"""
        return [lib_controller.info() for lib_controller in self.lib_controllers]

    def select(self, **kwargs):
        """Return a ThreadpoolController containing a subset of its current
        library controllers

        It will select all libraries matching at least one pair (key, value) from kwargs
        where key is an entry of the library info dict (like "user_api", "internal_api",
        "prefix", ...) and value is the value or a list of acceptable values for that
        entry.

        For instance, `ThreadpoolController().select(internal_api=["blis", "openblas"])`
        will select all library controllers whose internal_api is either "blis" or
        "openblas".
        """
        for key, vals in kwargs.items():
            kwargs[key] = [vals] if not isinstance(vals, list) else vals

        lib_controllers = [
            lib_controller
            for lib_controller in self.lib_controllers
            if any(
                getattr(lib_controller, key, None) in vals
                for key, vals in kwargs.items()
            )
        ]

        return ThreadpoolController._from_controllers(lib_controllers)
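The filtering rule in `select` — keep a library if *any* (key, value) pair matches one of its attributes — can be sketched standalone. The `FakeLib` class below is a hypothetical stand-in for a `LibController`:

```python
class FakeLib:
    """Illustrative stand-in for a LibController with a few info fields."""

    def __init__(self, prefix, internal_api, user_api):
        self.prefix = prefix
        self.internal_api = internal_api
        self.user_api = user_api


libs = [
    FakeLib("libopenblas", "openblas", "blas"),
    FakeLib("libomp", "openmp", "openmp"),
    FakeLib("libmkl_rt", "mkl", "blas"),
]


def select(libs, **kwargs):
    # normalize scalar values to one-element lists, as the real select does
    kwargs = {k: v if isinstance(v, list) else [v] for k, v in kwargs.items()}
    # keep a library if ANY (key, value) pair matches one of its attributes
    return [
        lib
        for lib in libs
        if any(getattr(lib, k, None) in v for k, v in kwargs.items())
    ]


matched = select(libs, internal_api=["blis", "openblas"], user_api="openmp")
print([lib.prefix for lib in matched])  # → ['libopenblas', 'libomp']
```

Note the semantics are a union, not an intersection: `libopenblas` matches on `internal_api`, `libomp` on `user_api`, and `libmkl_rt` matches neither.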

    def _get_params_for_sequential_blas_under_openmp(self):
        """Return appropriate params to use for a sequential BLAS call in an OpenMP loop

        This function takes into account the unexpected behavior of OpenBLAS with the
        OpenMP threading layer.
        """
        if self.select(
            internal_api="openblas", threading_layer="openmp"
        ).lib_controllers:
            return {"limits": None, "user_api": None}
        return {"limits": 1, "user_api": "blas"}

    @_format_docstring(
        USER_APIS=", ".join('"{}"'.format(api) for api in _ALL_USER_APIS),
        BLAS_LIBS=", ".join(_ALL_BLAS_LIBRARIES),
        OPENMP_LIBS=", ".join(_ALL_OPENMP_LIBRARIES),
    )
    def limit(self, *, limits=None, user_api=None):
        """Change the maximal number of threads that can be used in thread pools.

        This function returns an object that can be used either as a callable (the
        construction of this object limits the number of threads) or as a context
        manager, in a `with` block to automatically restore the original state of the
        controlled libraries when exiting the block.

        Set the maximal number of threads that can be used in thread pools used in
        the supported libraries to `limits`. This function works for libraries that
        are already loaded in the interpreter and can be changed dynamically.

        This effect is global and impacts the whole Python process. There is no thread
        level isolation as these libraries do not offer thread-local APIs to configure
        the number of threads to use in nested parallel calls.

        Parameters
        ----------
        limits : int, dict, 'sequential_blas_under_openmp' or None (default=None)
            The maximal number of threads that can be used in thread pools

            - If int, sets the maximum number of threads to `limits` for each
              library selected by `user_api`.

            - If it is a dictionary `{{key: max_threads}}`, this function sets a
              custom maximum number of threads for each `key` which can be either a
              `user_api` or a `prefix` for a specific library.

            - If 'sequential_blas_under_openmp', it will choose the appropriate
              `limits` and `user_api` parameters for the specific use case of
              sequential BLAS calls within an OpenMP parallel region. The `user_api`
              parameter is ignored.

            - If None, this function does not do anything.

        user_api : {USER_APIS} or None (default=None)
            APIs of libraries to limit. Used only if `limits` is an int.

            - If "blas", it will only limit BLAS supported libraries ({BLAS_LIBS}).

            - If "openmp", it will only limit OpenMP supported libraries
              ({OPENMP_LIBS}). Note that it can affect the number of threads used
              by the BLAS libraries if they rely on OpenMP.

            - If None, this function will apply to all supported libraries.
        """
        return _ThreadpoolLimiter(self, limits=limits, user_api=user_api)

    @_format_docstring(
        USER_APIS=", ".join('"{}"'.format(api) for api in _ALL_USER_APIS),
        BLAS_LIBS=", ".join(_ALL_BLAS_LIBRARIES),
        OPENMP_LIBS=", ".join(_ALL_OPENMP_LIBRARIES),
    )
    def wrap(self, *, limits=None, user_api=None):
        """Change the maximal number of threads that can be used in thread pools.

        This function returns an object that can be used as a decorator.

        Set the maximal number of threads that can be used in thread pools used in
        the supported libraries to `limits`. This function works for libraries that
        are already loaded in the interpreter and can be changed dynamically.

        Parameters
        ----------
        limits : int, dict or None (default=None)
            The maximal number of threads that can be used in thread pools

            - If int, sets the maximum number of threads to `limits` for each
              library selected by `user_api`.

            - If it is a dictionary `{{key: max_threads}}`, this function sets a
              custom maximum number of threads for each `key` which can be either a
              `user_api` or a `prefix` for a specific library.

            - If None, this function does not do anything.

        user_api : {USER_APIS} or None (default=None)
            APIs of libraries to limit. Used only if `limits` is an int.

            - If "blas", it will only limit BLAS supported libraries ({BLAS_LIBS}).

            - If "openmp", it will only limit OpenMP supported libraries
              ({OPENMP_LIBS}). Note that it can affect the number of threads used
              by the BLAS libraries if they rely on OpenMP.

            - If None, this function will apply to all supported libraries.
        """
        return _ThreadpoolLimiter.wrap(self, limits=limits, user_api=user_api)

    def __len__(self):
        return len(self.lib_controllers)

    def _load_libraries(self):
        """Loop through loaded shared libraries and store the supported ones"""
        if sys.platform == "darwin":
            self._find_libraries_with_dyld()
        elif sys.platform == "win32":
            self._find_libraries_with_enum_process_module_ex()
        elif "pyodide" in sys.modules:
            self._find_libraries_pyodide()
        else:
            self._find_libraries_with_dl_iterate_phdr()

    def _find_libraries_with_dl_iterate_phdr(self):
        """Loop through loaded libraries and return binders on supported ones

        This function is expected to work on POSIX systems only.
        This code is adapted from code by Intel developer @anton-malakhov
        available at https://github.com/IntelPython/smp

        Copyright (c) 2017, Intel Corporation published under the BSD 3-Clause
        license
        """
        libc = self._get_libc()
        if not hasattr(libc, "dl_iterate_phdr"):  # pragma: no cover
            warnings.warn(
                "Could not find dl_iterate_phdr in the C standard library.",
                RuntimeWarning,
            )
            return []

        # Callback function for `dl_iterate_phdr` which is called for every
        # library loaded in the current process until it returns 1.
        def match_library_callback(info, size, data):
            # Get the path of the current library
            filepath = info.contents.dlpi_name
            if filepath:
                filepath = filepath.decode("utf-8")

                # Store the library controller if it is supported and selected
                self._make_controller_from_path(filepath)
            return 0

        c_func_signature = ctypes.CFUNCTYPE(
            ctypes.c_int,  # Return type
            ctypes.POINTER(_dl_phdr_info),
            ctypes.c_size_t,
            ctypes.c_char_p,
        )
        c_match_library_callback = c_func_signature(match_library_callback)

        data = ctypes.c_char_p(b"")
        libc.dl_iterate_phdr(c_match_library_callback, data)

    def _find_libraries_with_dyld(self):
        """Loop through loaded libraries and return binders on supported ones

        This function is expected to work on macOS systems only.
        """
        libc = self._get_libc()
        if not hasattr(libc, "_dyld_image_count"):  # pragma: no cover
            warnings.warn(
                "Could not find _dyld_image_count in the C standard library.",
                RuntimeWarning,
            )
            return []

        n_dyld = libc._dyld_image_count()
        libc._dyld_get_image_name.restype = ctypes.c_char_p

        for i in range(n_dyld):
            filepath = ctypes.string_at(libc._dyld_get_image_name(i))
            filepath = filepath.decode("utf-8")

            # Store the library controller if it is supported and selected
            self._make_controller_from_path(filepath)

    def _find_libraries_with_enum_process_module_ex(self):
        """Loop through loaded libraries and return binders on supported ones

        This function is expected to work on Windows systems only.
        This code is adapted from code by Philipp Hagemeister @phihag available
        at https://stackoverflow.com/questions/17474574
        """
        from ctypes.wintypes import DWORD, HMODULE, MAX_PATH

        PROCESS_QUERY_INFORMATION = 0x0400
        PROCESS_VM_READ = 0x0010

        LIST_LIBRARIES_ALL = 0x03

        ps_api = self._get_windll("Psapi")
        kernel_32 = self._get_windll("kernel32")

        h_process = kernel_32.OpenProcess(
            PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, os.getpid()
        )
        if not h_process:  # pragma: no cover
            raise OSError(f"Could not open PID {os.getpid()}")

        try:
            buf_count = 256
            needed = DWORD()
            # Grow the buffer until it becomes large enough to hold all the
            # module headers
            while True:
                buf = (HMODULE * buf_count)()
                buf_size = ctypes.sizeof(buf)
                if not ps_api.EnumProcessModulesEx(
                    h_process,
                    ctypes.byref(buf),
                    buf_size,
                    ctypes.byref(needed),
                    LIST_LIBRARIES_ALL,
                ):
                    raise OSError("EnumProcessModulesEx failed")
                if buf_size >= needed.value:
                    break
                buf_count = needed.value // (buf_size // buf_count)

            count = needed.value // (buf_size // buf_count)
            h_modules = map(HMODULE, buf[:count])

            # Loop through all the module headers and get the library path
            buf = ctypes.create_unicode_buffer(MAX_PATH)
            n_size = DWORD()
            for h_module in h_modules:
                # Get the path of the current module
                if not ps_api.GetModuleFileNameExW(
                    h_process, h_module, ctypes.byref(buf), ctypes.byref(n_size)
                ):
                    raise OSError("GetModuleFileNameEx failed")
                filepath = buf.value

                # Store the library controller if it is supported and selected
                self._make_controller_from_path(filepath)
        finally:
            kernel_32.CloseHandle(h_process)

    def _find_libraries_pyodide(self):
        """Pyodide specific implementation for finding loaded libraries.

        Adapted from suggestion in https://github.com/joblib/threadpoolctl/pull/169#issuecomment-1946696449.

        One day, we may have a simpler solution. libc dl_iterate_phdr needs to
        be implemented in Emscripten and exposed in Pyodide, see
        https://github.com/emscripten-core/emscripten/issues/21354 for more
        details.
        """
        try:
            from pyodide_js._module import LDSO
        except ImportError:
            warnings.warn(
                "Unable to import LDSO from pyodide_js._module. This should never "
                "happen."
            )
            return

        for filepath in LDSO.loadedLibsByName.as_object_map():
            # Some libraries are duplicated by Pyodide and do not exist in the
            # filesystem, so we first check for the existence of the file. For
            # more details, see
            # https://github.com/joblib/threadpoolctl/pull/169#issuecomment-1947946728
            if os.path.exists(filepath):
                self._make_controller_from_path(filepath)

    def _make_controller_from_path(self, filepath):
        """Store a library controller if it is supported and selected"""
        # Required to resolve symlinks
        filepath = _realpath(filepath)
        # `lower` required to take account of OpenMP dll case on Windows
        # (vcomp, VCOMP, Vcomp, ...)
        filename = os.path.basename(filepath).lower()

        # Loop through supported libraries to find if this filename corresponds
        # to a supported one.
        for controller_class in _ALL_CONTROLLERS:
            # check if filename matches a supported prefix
            prefix = self._check_prefix(filename, controller_class.filename_prefixes)

            # filename does not match any of the prefixes of the candidate
            # library. move to next library.
            if prefix is None:
                continue

            # workaround for BLAS libraries packaged by conda-forge on Windows, which
            # are all renamed "libblas.dll". We thus have to check to which BLAS
            # implementation it actually corresponds by looking for implementation
            # specific symbols.
            if prefix == "libblas":
                if filename.endswith(".dll"):
                    libblas = ctypes.CDLL(filepath, _RTLD_NOLOAD)
                    if not any(
                        hasattr(libblas, func)
                        for func in controller_class.check_symbols
                    ):
                        continue
                else:
                    # We ignore libblas on platforms other than Windows because there
                    # might be a libblas dso coming with openblas for instance that
                    # can't be used to instantiate a pertinent LibController (many
                    # symbols are missing) and would create confusion by making a
                    # duplicate entry in threadpool_info.
                    continue

            # filename matches a prefix. Now we check if the library has the symbols we
            # are looking for. If none of the symbols exists, it's very likely not the
            # expected library (e.g. a library having a common prefix with one of
            # our supported libraries). Otherwise, create and store the library
            # controller.
            lib_controller = controller_class(
                filepath=filepath, prefix=prefix, parent=self
            )

            if filepath in (lib.filepath for lib in self.lib_controllers):
                # We already have a controller for this library.
                continue

            if not hasattr(controller_class, "check_symbols") or any(
                hasattr(lib_controller.dynlib, func)
                for func in controller_class.check_symbols
            ):
                self.lib_controllers.append(lib_controller)

    def _check_prefix(self, library_basename, filename_prefixes):
        """Return the prefix library_basename starts with

        Return None if none matches.
        """
        for prefix in filename_prefixes:
            if library_basename.startswith(prefix):
                return prefix
        return None

    def _warn_if_incompatible_openmp(self):
        """Raise a warning if llvm-OpenMP and intel-OpenMP are both loaded"""
        prefixes = [lib_controller.prefix for lib_controller in self.lib_controllers]
        msg = textwrap.dedent(
            """
            Found Intel OpenMP ('libiomp') and LLVM OpenMP ('libomp') loaded at
            the same time. Both libraries are known to be incompatible and this
            can cause random crashes or deadlocks on Linux when loaded in the
            same Python program.
            Using threadpoolctl may cause crashes or deadlocks. For more
            information and possible workarounds, please see
            https://github.com/joblib/threadpoolctl/blob/master/multiple_openmp.md
            """
        )
        if "libomp" in prefixes and "libiomp" in prefixes:
            warnings.warn(msg, RuntimeWarning)

    @classmethod
    def _get_libc(cls):
        """Load the lib-C for unix systems."""
        libc = cls._system_libraries.get("libc")
        if libc is None:
            # Remark: If libc is statically linked or if Python is linked against an
            # alternative implementation of libc like musl, find_library will return
            # None and CDLL will load the main program itself which should contain the
            # libc symbols. We still name it libc for convenience.
            # If the main program does not contain the libc symbols, it's ok because
            # we check their presence later anyway.
            libc = ctypes.CDLL(find_library("c"), mode=_RTLD_NOLOAD)
            cls._system_libraries["libc"] = libc
        return libc

    @classmethod
    def _get_windll(cls, dll_name):
        """Load a Windows DLL"""
        dll = cls._system_libraries.get(dll_name)
        if dll is None:
            dll = ctypes.WinDLL(f"{dll_name}.dll")
            cls._system_libraries[dll_name] = dll
        return dll


def _main():
    """Commandline interface to display thread-pool information and exit."""
    import argparse
    import importlib
    import json
    import sys

    parser = argparse.ArgumentParser(
        usage="python -m threadpoolctl -i numpy scipy.linalg xgboost",
        description="Display thread-pool information and exit.",
    )
    parser.add_argument(
        "-i",
        "--import",
        dest="modules",
        nargs="*",
        default=(),
        help="Python modules to import before introspecting thread-pools.",
    )
    parser.add_argument(
        "-c",
        "--command",
        help="a Python statement to execute before introspecting thread-pools.",
    )

    options = parser.parse_args(sys.argv[1:])
    for module in options.modules:
        try:
            importlib.import_module(module, package=None)
        except ImportError:
            print("WARNING: could not import", module, file=sys.stderr)

    if options.command:
        exec(options.command)

    print(json.dumps(threadpool_info(), indent=2))


if __name__ == "__main__":
    _main()
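The CLI above ends by printing `threadpool_info()` as indented JSON. The shape of that output can be sketched with the stdlib alone — the entries below are illustrative stand-ins, not real introspection results:

```python
import json

# illustrative entries shaped like threadpool_info() output:
# one dict per detected library, built from LibController.info()
info = [
    {"user_api": "blas", "internal_api": "openblas",
     "prefix": "libopenblas", "num_threads": 4},
    {"user_api": "openmp", "internal_api": "openmp",
     "prefix": "libgomp", "num_threads": 8},
]

# same rendering the CLI performs
rendered = json.dumps(info, indent=2)
print(rendered)
```

Since the output is plain JSON, it composes well with tools like `jq`, e.g. to extract just the thread counts per library.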
llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/INSTALLER
ADDED
@@ -0,0 +1 @@
pip
llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/METADATA
ADDED
@@ -0,0 +1,1590 @@
1 |
+
Metadata-Version: 2.1
|
2 |
+
Name: tqdm
|
3 |
+
Version: 4.66.4
|
4 |
+
Summary: Fast, Extensible Progress Meter
|
5 |
+
Maintainer-email: tqdm developers <[email protected]>
|
6 |
+
License: MPL-2.0 AND MIT
|
7 |
+
Project-URL: homepage, https://tqdm.github.io
|
8 |
+
Project-URL: repository, https://github.com/tqdm/tqdm
|
9 |
+
Project-URL: changelog, https://tqdm.github.io/releases
|
10 |
+
Project-URL: wiki, https://github.com/tqdm/tqdm/wiki
|
11 |
+
Keywords: progressbar,progressmeter,progress,bar,meter,rate,eta,console,terminal,time
|
12 |
+
Classifier: Development Status :: 5 - Production/Stable
|
13 |
+
Classifier: Environment :: Console
|
14 |
+
Classifier: Environment :: MacOS X
|
15 |
+
Classifier: Environment :: Other Environment
|
16 |
+
Classifier: Environment :: Win32 (MS Windows)
|
17 |
+
Classifier: Environment :: X11 Applications
|
18 |
+
Classifier: Framework :: IPython
|
19 |
+
Classifier: Framework :: Jupyter
|
20 |
+
Classifier: Intended Audience :: Developers
|
21 |
+
Classifier: Intended Audience :: Education
|
22 |
+
Classifier: Intended Audience :: End Users/Desktop
|
23 |
+
Classifier: Intended Audience :: Other Audience
|
24 |
+
Classifier: Intended Audience :: System Administrators
|
25 |
+
Classifier: License :: OSI Approved :: MIT License
|
26 |
+
Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
|
27 |
+
Classifier: Operating System :: MacOS
|
28 |
+
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft
Classifier: Operating System :: Microsoft :: MS-DOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: POSIX :: SunOS/Solaris
Classifier: Operating System :: Unix
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: Implementation
Classifier: Programming Language :: Python :: Implementation :: IronPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Programming Language :: Unix Shell
Classifier: Topic :: Desktop Environment
Classifier: Topic :: Education :: Computer Aided Instruction (CAI)
Classifier: Topic :: Education :: Testing
Classifier: Topic :: Office/Business
Classifier: Topic :: Other/Nonlisted Topic
Classifier: Topic :: Software Development :: Build Tools
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Pre-processors
Classifier: Topic :: Software Development :: User Interfaces
Classifier: Topic :: System :: Installation/Setup
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Shells
Classifier: Topic :: Terminals
Classifier: Topic :: Utilities
Requires-Python: >=3.7
Description-Content-Type: text/x-rst
License-File: LICENCE
Requires-Dist: colorama ; platform_system == "Windows"
Provides-Extra: dev
Requires-Dist: pytest >=6 ; extra == 'dev'
Requires-Dist: pytest-cov ; extra == 'dev'
Requires-Dist: pytest-timeout ; extra == 'dev'
Requires-Dist: pytest-xdist ; extra == 'dev'
Provides-Extra: notebook
Requires-Dist: ipywidgets >=6 ; extra == 'notebook'
Provides-Extra: slack
Requires-Dist: slack-sdk ; extra == 'slack'
Provides-Extra: telegram
Requires-Dist: requests ; extra == 'telegram'

|Logo|

tqdm
====

|Py-Versions| |Versions| |Conda-Forge-Status| |Docker| |Snapcraft|

|Build-Status| |Coverage-Status| |Branch-Coverage-Status| |Codacy-Grade| |Libraries-Rank| |PyPI-Downloads|

|LICENCE| |OpenHub-Status| |binder-demo| |awesome-python|

``tqdm`` derives from the Arabic word *taqaddum* (تقدّم) which can mean "progress,"
and is an abbreviation for "I love you so much" in Spanish (*te quiero demasiado*).

Instantly make your loops show a smart progress meter - just wrap any
iterable with ``tqdm(iterable)``, and you're done!

.. code:: python

    from tqdm import tqdm
    for i in tqdm(range(10000)):
        ...

``76%|████████████████████████ | 7568/10000 [00:33<00:10, 229.00it/s]``

``trange(N)`` can also be used as a convenient shortcut for
``tqdm(range(N))``.

|Screenshot|
|Video| |Slides| |Merch|

It can also be executed as a module with pipes:

.. code:: sh

    $ seq 9999999 | tqdm --bytes | wc -l
    75.2MB [00:00, 217MB/s]
    9999999

    $ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
        > backup.tgz
     32%|██████████▍ | 8.89G/27.9G [00:42<01:31, 223MB/s]

Overhead is low -- about 60ns per iteration (80ns with ``tqdm.gui``), and is
unit tested against performance regression.
By comparison, the well-established
`ProgressBar <https://github.com/niltonvolpato/python-progressbar>`__ has
an 800ns/iter overhead.

In addition to its low overhead, ``tqdm`` uses smart algorithms to predict
the remaining time and to skip unnecessary iteration displays, which allows
for a negligible overhead in most cases.

``tqdm`` works on any platform
(Linux, Windows, Mac, FreeBSD, NetBSD, Solaris/SunOS),
in any console or in a GUI, and is also friendly with IPython/Jupyter notebooks.

``tqdm`` does not require any dependencies (not even ``curses``!), just
Python and an environment supporting ``carriage return \r`` and
``line feed \n`` control characters.

------------------------------------------

.. contents:: Table of contents
   :backlinks: top
   :local:


Installation
------------

Latest PyPI stable release
~~~~~~~~~~~~~~~~~~~~~~~~~~

|Versions| |PyPI-Downloads| |Libraries-Dependents|

.. code:: sh

    pip install tqdm

Latest development release on GitHub
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

|GitHub-Status| |GitHub-Stars| |GitHub-Commits| |GitHub-Forks| |GitHub-Updated|

Pull and install pre-release ``devel`` branch:

.. code:: sh

    pip install "git+https://github.com/tqdm/tqdm.git@devel#egg=tqdm"

Latest Conda release
~~~~~~~~~~~~~~~~~~~~

|Conda-Forge-Status|

.. code:: sh

    conda install -c conda-forge tqdm

Latest Snapcraft release
~~~~~~~~~~~~~~~~~~~~~~~~

|Snapcraft|

There are 3 channels to choose from:

.. code:: sh

    snap install tqdm  # implies --stable, i.e. latest tagged release
    snap install tqdm  --candidate  # master branch
    snap install tqdm  --edge  # devel branch

Note that ``snap`` binaries are purely for CLI use (not ``import``-able), and
automatically set up ``bash`` tab-completion.

Latest Docker release
~~~~~~~~~~~~~~~~~~~~~

|Docker|

.. code:: sh

    docker pull tqdm/tqdm
    docker run -i --rm tqdm/tqdm --help

Other
~~~~~

There are other (unofficial) places where ``tqdm`` may be downloaded, particularly for CLI use:

|Repology|

.. |Repology| image:: https://repology.org/badge/tiny-repos/python:tqdm.svg
   :target: https://repology.org/project/python:tqdm/versions

Changelog
---------

The list of all changes is available either on GitHub's Releases:
|GitHub-Status|, on the
`wiki <https://github.com/tqdm/tqdm/wiki/Releases>`__, or on the
`website <https://tqdm.github.io/releases>`__.

Usage
-----

``tqdm`` is very versatile and can be used in a number of ways.
The three main ones are given below.

Iterable-based
~~~~~~~~~~~~~~

Wrap ``tqdm()`` around any iterable:

.. code:: python

    from tqdm import tqdm
    from time import sleep

    text = ""
    for char in tqdm(["a", "b", "c", "d"]):
        sleep(0.25)
        text = text + char

``trange(i)`` is a special optimised instance of ``tqdm(range(i))``:

.. code:: python

    from tqdm import trange

    for i in trange(100):
        sleep(0.01)

Instantiation outside of the loop allows for manual control over ``tqdm()``:

.. code:: python

    pbar = tqdm(["a", "b", "c", "d"])
    for char in pbar:
        sleep(0.25)
        pbar.set_description("Processing %s" % char)

Manual
~~~~~~

Manual control of ``tqdm()`` updates using a ``with`` statement:

.. code:: python

    with tqdm(total=100) as pbar:
        for i in range(10):
            sleep(0.1)
            pbar.update(10)

If the optional variable ``total`` (or an iterable with ``len()``) is
provided, predictive stats are displayed.

``with`` is also optional (you can just assign ``tqdm()`` to a variable,
but in this case don't forget to ``del`` or ``close()`` at the end):

.. code:: python

    pbar = tqdm(total=100)
    for i in range(10):
        sleep(0.1)
        pbar.update(10)
    pbar.close()

Module
~~~~~~

Perhaps the most wonderful use of ``tqdm`` is in a script or on the command
line. Simply inserting ``tqdm`` (or ``python -m tqdm``) between pipes will pass
through all ``stdin`` to ``stdout`` while printing progress to ``stderr``.

The example below demonstrates counting the number of lines in all Python files
in the current directory, with timing information included.

.. code:: sh

    $ time find . -name '*.py' -type f -exec cat \{} \; | wc -l
    857365

    real    0m3.458s
    user    0m0.274s
    sys     0m3.325s

    $ time find . -name '*.py' -type f -exec cat \{} \; | tqdm | wc -l
    857366it [00:03, 246471.31it/s]
    857365

    real    0m3.585s
    user    0m0.862s
    sys     0m3.358s

Note that the usual arguments for ``tqdm`` can also be specified.

.. code:: sh

    $ find . -name '*.py' -type f -exec cat \{} \; |
        tqdm --unit loc --unit_scale --total 857366 >> /dev/null
    100%|█████████████████████████████████| 857K/857K [00:04<00:00, 246Kloc/s]

Backing up a large directory?

.. code:: sh

    $ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
        > backup.tgz
     44%|██████████████▊ | 153M/352M [00:14<00:18, 11.0MB/s]

This can be beautified further:

.. code:: sh

    $ BYTES=$(du -sb docs/ | cut -f1)
    $ tar -cf - docs/ \
      | tqdm --bytes --total "$BYTES" --desc Processing | gzip \
      | tqdm --bytes --total "$BYTES" --desc Compressed --position 1 \
      > ~/backup.tgz
    Processing: 100%|██████████████████████| 352M/352M [00:14<00:00, 30.2MB/s]
    Compressed:  42%|█████████▎ | 148M/352M [00:14<00:19, 10.9MB/s]

Or done on a file level using 7-zip:

.. code:: sh

    $ 7z a -bd -r backup.7z docs/ | grep Compressing \
      | tqdm --total $(find docs/ -type f | wc -l) --unit files \
      | grep -v Compressing
    100%|██████████████████████████▉| 15327/15327 [01:00<00:00, 712.96files/s]

Pre-existing CLI programs already outputting basic progress information will
benefit from ``tqdm``'s ``--update`` and ``--update_to`` flags:

.. code:: sh

    $ seq 3 0.1 5 | tqdm --total 5 --update_to --null
    100%|████████████████████████████████████| 5.0/5 [00:00<00:00, 9673.21it/s]
    $ seq 10 | tqdm --update --null  # 1 + 2 + ... + 10 = 55 iterations
    55it [00:00, 90006.52it/s]

FAQ and Known Issues
--------------------

|GitHub-Issues|

The most common issues relate to excessive output on multiple lines, instead
of a neat one-line progress bar.

- Consoles in general: require support for carriage return (``CR``, ``\r``).

  * Some cloud logging consoles which don't support ``\r`` properly
    (`cloudwatch <https://github.com/tqdm/tqdm/issues/966>`__,
    `K8s <https://github.com/tqdm/tqdm/issues/1319>`__) may benefit from
    ``export TQDM_POSITION=-1``.

- Nested progress bars:

  * Consoles in general: require support for moving cursors up to the
    previous line. For example,
    `IDLE <https://github.com/tqdm/tqdm/issues/191#issuecomment-230168030>`__,
    `ConEmu <https://github.com/tqdm/tqdm/issues/254>`__ and
    `PyCharm <https://github.com/tqdm/tqdm/issues/203>`__ (also
    `here <https://github.com/tqdm/tqdm/issues/208>`__,
    `here <https://github.com/tqdm/tqdm/issues/307>`__, and
    `here <https://github.com/tqdm/tqdm/issues/454#issuecomment-335416815>`__)
    lack full support.
  * Windows: additionally may require the Python module ``colorama``
    to ensure nested bars stay within their respective lines.

- Unicode:

  * Environments which report that they support unicode will have solid smooth
    progressbars. The fallback is an ``ascii``-only bar.
  * Windows consoles often only partially support unicode and thus
    `often require explicit ascii=True <https://github.com/tqdm/tqdm/issues/454#issuecomment-335416815>`__
    (also `here <https://github.com/tqdm/tqdm/issues/499>`__). This is due to
    either normal-width unicode characters being incorrectly displayed as
    "wide", or some unicode characters not rendering.

- Wrapping generators:

  * Generator wrapper functions tend to hide the length of iterables.
    ``tqdm`` does not.
  * Replace ``tqdm(enumerate(...))`` with ``enumerate(tqdm(...))`` or
    ``tqdm(enumerate(x), total=len(x), ...)``.
    The same applies to ``numpy.ndenumerate``.
  * Replace ``tqdm(zip(a, b))`` with ``zip(tqdm(a), b)`` or even
    ``zip(tqdm(a), tqdm(b))``.
  * The same applies to ``itertools``.
  * Some useful convenience functions can be found under ``tqdm.contrib``.
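The generator-wrapping advice above can be sketched as follows (a minimal illustration; the list ``x`` is a made-up example):

```python
from tqdm import tqdm

x = ["a", "b", "c"]

# enumerate() hides len(x), so wrap the sized iterable itself...
for i, char in enumerate(tqdm(x)):
    pass

# ...or pass total= explicitly if tqdm must wrap the outer iterator:
for i, char in tqdm(enumerate(x), total=len(x)):
    pass
```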

- `No intermediate output in docker-compose <https://github.com/tqdm/tqdm/issues/771>`__:
  use ``docker-compose run`` instead of ``docker-compose up`` and ``tty: true``.

- Overriding defaults via environment variables:
  e.g. in CI/cloud jobs, ``export TQDM_MININTERVAL=5`` to avoid log spam.
  This override logic is handled by the ``tqdm.utils.envwrap`` decorator
  (useful independent of ``tqdm``).
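The idea behind ``envwrap`` can be sketched with a stdlib-only toy decorator (``envwrap_sketch`` and ``make_bar`` are illustrative names, not part of ``tqdm``): environment variables named ``<prefix><PARAM>`` override a function's default keyword values, cast to the type of the declared default.

```python
import inspect
import os
from functools import wraps

def envwrap_sketch(prefix):
    """Toy version of the ``tqdm.utils.envwrap`` idea: env vars named
    ``<prefix><PARAM>`` override a function's default keyword values."""
    def decorator(func):
        sig = inspect.signature(func)
        @wraps(func)
        def inner(*args, **kwargs):
            for name, param in sig.parameters.items():
                env = os.environ.get(prefix + name.upper())
                if env is not None and name not in kwargs:
                    # cast using the type of the declared default
                    kwargs[name] = type(param.default)(env)
            return func(*args, **kwargs)
        return inner
    return decorator

@envwrap_sketch("TQDM_")
def make_bar(mininterval=0.1):
    return mininterval

os.environ["TQDM_MININTERVAL"] = "5"
print(make_bar())  # 5.0 -- the env var overrides the default
```

Explicitly passed keyword arguments still win over the environment, matching the "override *defaults*" behaviour described above.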

If you come across any other difficulties, browse and file |GitHub-Issues|.

Documentation
-------------

|Py-Versions| |README-Hits| (Since 19 May 2016)

.. code:: python

    class tqdm():
        """
        Decorate an iterable object, returning an iterator which acts exactly
        like the original iterable, but prints a dynamically updating
        progressbar every time a value is requested.
        """

        @envwrap("TQDM_")  # override defaults via env vars
        def __init__(self, iterable=None, desc=None, total=None, leave=True,
                     file=None, ncols=None, mininterval=0.1,
                     maxinterval=10.0, miniters=None, ascii=None, disable=False,
                     unit='it', unit_scale=False, dynamic_ncols=False,
                     smoothing=0.3, bar_format=None, initial=0, position=None,
                     postfix=None, unit_divisor=1000, write_bytes=False,
                     lock_args=None, nrows=None, colour=None, delay=0):

Parameters
~~~~~~~~~~

* iterable : iterable, optional
    Iterable to decorate with a progressbar.
    Leave blank to manually manage the updates.
* desc : str, optional
    Prefix for the progressbar.
* total : int or float, optional
    The number of expected iterations. If unspecified,
    len(iterable) is used if possible. If float("inf") or as a last
    resort, only basic progress statistics are displayed
    (no ETA, no progressbar).
    If ``gui`` is True and this parameter needs subsequent updating,
    specify an initial arbitrarily large positive number,
    e.g. 9e9.
* leave : bool, optional
    If [default: True], keeps all traces of the progressbar
    upon termination of iteration.
    If ``None``, will leave only if ``position`` is ``0``.
* file : ``io.TextIOWrapper`` or ``io.StringIO``, optional
    Specifies where to output the progress messages
    (default: sys.stderr). Uses ``file.write(str)`` and ``file.flush()``
    methods. For encoding, see ``write_bytes``.
* ncols : int, optional
    The width of the entire output message. If specified,
    dynamically resizes the progressbar to stay within this bound.
    If unspecified, attempts to use environment width. The
    fallback is a meter width of 10 and no limit for the counter and
    statistics. If 0, will not print any meter (only stats).
* mininterval : float, optional
    Minimum progress display update interval [default: 0.1] seconds.
* maxinterval : float, optional
    Maximum progress display update interval [default: 10] seconds.
    Automatically adjusts ``miniters`` to correspond to ``mininterval``
    after long display update lag. Only works if ``dynamic_miniters``
    or monitor thread is enabled.
* miniters : int or float, optional
    Minimum progress display update interval, in iterations.
    If 0 and ``dynamic_miniters``, will automatically adjust to equal
    ``mininterval`` (more CPU efficient, good for tight loops).
    If > 0, will skip display of specified number of iterations.
    Tweak this and ``mininterval`` to get very efficient loops.
    If your progress is erratic with both fast and slow iterations
    (network, skipping items, etc) you should set miniters=1.
* ascii : bool or str, optional
    If unspecified or False, use unicode (smooth blocks) to fill
    the meter. The fallback is to use ASCII characters " 123456789#".
* disable : bool, optional
    Whether to disable the entire progressbar wrapper
    [default: False]. If set to None, disable on non-TTY.
* unit : str, optional
    String that will be used to define the unit of each iteration
    [default: it].
* unit_scale : bool or int or float, optional
    If 1 or True, the number of iterations will be reduced/scaled
    automatically and a metric prefix following the
    International System of Units standard will be added
    (kilo, mega, etc.) [default: False]. If any other non-zero
    number, will scale ``total`` and ``n``.
* dynamic_ncols : bool, optional
    If set, constantly alters ``ncols`` and ``nrows`` to the
    environment (allowing for window resizes) [default: False].
* smoothing : float, optional
    Exponential moving average smoothing factor for speed estimates
    (ignored in GUI mode). Ranges from 0 (average speed) to 1
    (current/instantaneous speed) [default: 0.3].
* bar_format : str, optional
    Specify a custom bar string formatting. May impact performance.
    [default: '{l_bar}{bar}{r_bar}'], where
    l_bar='{desc}: {percentage:3.0f}%|' and
    r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, '
        '{rate_fmt}{postfix}]'
    Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt,
    percentage, elapsed, elapsed_s, ncols, nrows, desc, unit,
    rate, rate_fmt, rate_noinv, rate_noinv_fmt,
    rate_inv, rate_inv_fmt, postfix, unit_divisor,
    remaining, remaining_s, eta.
    Note that a trailing ": " is automatically removed after {desc}
    if the latter is empty.
* initial : int or float, optional
    The initial counter value. Useful when restarting a progress
    bar [default: 0]. If using float, consider specifying ``{n:.3f}``
    or similar in ``bar_format``, or specifying ``unit_scale``.
* position : int, optional
    Specify the line offset to print this bar (starting from 0).
    Automatic if unspecified.
    Useful to manage multiple bars at once (eg, from threads).
* postfix : dict or ``*``, optional
    Specify additional stats to display at the end of the bar.
    Calls ``set_postfix(**postfix)`` if possible (dict).
* unit_divisor : float, optional
    [default: 1000], ignored unless ``unit_scale`` is True.
* write_bytes : bool, optional
    Whether to write bytes. If (default: False) will write unicode.
* lock_args : tuple, optional
    Passed to ``refresh`` for intermediate output
    (initialisation, iterating, and updating).
* nrows : int, optional
    The screen height. If specified, hides nested bars outside this
    bound. If unspecified, attempts to use environment height.
    The fallback is 20.
* colour : str, optional
    Bar colour (e.g. 'green', '#00ff00').
* delay : float, optional
    Don't display until [default: 0] seconds have elapsed.

Extra CLI Options
~~~~~~~~~~~~~~~~~

* delim : chr, optional
    Delimiting character [default: '\n']. Use '\0' for null.
    N.B.: on Windows systems, Python converts '\n' to '\r\n'.
* buf_size : int, optional
    String buffer size in bytes [default: 256]
    used when ``delim`` is specified.
* bytes : bool, optional
    If true, will count bytes, ignore ``delim``, and default
    ``unit_scale`` to True, ``unit_divisor`` to 1024, and ``unit`` to 'B'.
* tee : bool, optional
    If true, passes ``stdin`` to both ``stderr`` and ``stdout``.
* update : bool, optional
    If true, will treat input as newly elapsed iterations,
    i.e. numbers to pass to ``update()``. Note that this is slow
    (~2e5 it/s) since every input must be decoded as a number.
* update_to : bool, optional
    If true, will treat input as total elapsed iterations,
    i.e. numbers to assign to ``self.n``. Note that this is slow
    (~2e5 it/s) since every input must be decoded as a number.
* null : bool, optional
    If true, will discard input (no stdout).
* manpath : str, optional
    Directory in which to install tqdm man pages.
* comppath : str, optional
    Directory in which to place tqdm completion.
* log : str, optional
    CRITICAL|FATAL|ERROR|WARN(ING)|[default: 'INFO']|DEBUG|NOTSET.

Returns
~~~~~~~

* out : decorated iterator.

.. code:: python

    class tqdm():
        def update(self, n=1):
            """
            Manually update the progress bar, useful for streams
            such as reading files.
            E.g.:
            >>> t = tqdm(total=filesize) # Initialise
            >>> for current_buffer in stream:
            ...    ...
            ...    t.update(len(current_buffer))
            >>> t.close()
            The last line is highly recommended, but possibly not necessary if
            ``t.update()`` will be called in such a way that ``filesize`` will be
            exactly reached and printed.

            Parameters
            ----------
            n : int or float, optional
                Increment to add to the internal counter of iterations
                [default: 1]. If using float, consider specifying ``{n:.3f}``
                or similar in ``bar_format``, or specifying ``unit_scale``.

            Returns
            -------
            out : bool or None
                True if a ``display()`` was triggered.
            """

        def close(self):
            """Cleanup and (if leave=False) close the progressbar."""

        def clear(self, nomove=False):
            """Clear current bar display."""

        def refresh(self):
            """
            Force refresh the display of this bar.

            Parameters
            ----------
            nolock : bool, optional
                If ``True``, does not lock.
                If [default: ``False``]: calls ``acquire()`` on internal lock.
            lock_args : tuple, optional
                Passed to internal lock's ``acquire()``.
                If specified, will only ``display()`` if ``acquire()`` returns ``True``.
            """

        def unpause(self):
            """Restart tqdm timer from last print time."""

        def reset(self, total=None):
            """
            Resets to 0 iterations for repeated use.

            Consider combining with ``leave=True``.

            Parameters
            ----------
            total : int or float, optional. Total to use for the new bar.
            """

        def set_description(self, desc=None, refresh=True):
            """
            Set/modify description of the progress bar.

            Parameters
            ----------
            desc : str, optional
            refresh : bool, optional
                Forces refresh [default: True].
            """

        def set_postfix(self, ordered_dict=None, refresh=True, **tqdm_kwargs):
            """
            Set/modify postfix (additional stats)
            with automatic formatting based on datatype.

            Parameters
            ----------
            ordered_dict : dict or OrderedDict, optional
            refresh : bool, optional
                Forces refresh [default: True].
            kwargs : dict, optional
            """

        @classmethod
        def write(cls, s, file=sys.stdout, end="\n"):
            """Print a message via tqdm (without overlap with bars)."""

        @property
        def format_dict(self):
            """Public API for read-only member access."""

        def display(self, msg=None, pos=None):
            """
            Use ``self.sp`` to display ``msg`` in the specified ``pos``.

            Consider overloading this function when inheriting to use e.g.:
            ``self.some_frontend(**self.format_dict)`` instead of ``self.sp``.

            Parameters
            ----------
            msg : str, optional. What to display (default: ``repr(self)``).
            pos : int, optional. Position to ``moveto``
                (default: ``abs(self.pos)``).
            """

        @classmethod
        @contextmanager
        def wrapattr(cls, stream, method, total=None, bytes=True, **tqdm_kwargs):
            """
            stream : file-like object.
            method : str, "read" or "write". The result of ``read()`` and
                the first argument of ``write()`` should have a ``len()``.

            >>> with tqdm.wrapattr(file_obj, "read", total=file_obj.size) as fobj:
            ...     while True:
            ...         chunk = fobj.read(chunk_size)
            ...         if not chunk:
            ...             break
            """

        @classmethod
        def pandas(cls, *targs, **tqdm_kwargs):
            """Registers the current `tqdm` class with `pandas`."""

    def trange(*args, **tqdm_kwargs):
        """Shortcut for `tqdm(range(*args), **tqdm_kwargs)`."""
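For instance, after registration via ``tqdm.pandas()``, pandas objects gain ``progress_apply`` (a brief sketch with a toy DataFrame):

```python
import pandas as pd
from tqdm import tqdm

tqdm.pandas(desc="apply")  # registers `progress_apply` on pandas objects

df = pd.DataFrame({"x": range(1000)})
# behaves like .apply(), but displays a progress bar while it runs
squares = df["x"].progress_apply(lambda v: v * v)
```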

Convenience Functions
~~~~~~~~~~~~~~~~~~~~~

.. code:: python

    def tqdm.contrib.tenumerate(iterable, start=0, total=None,
                                tqdm_class=tqdm.auto.tqdm, **tqdm_kwargs):
        """Equivalent of `numpy.ndenumerate` or builtin `enumerate`."""

    def tqdm.contrib.tzip(iter1, *iter2plus, **tqdm_kwargs):
        """Equivalent of builtin `zip`."""

    def tqdm.contrib.tmap(function, *sequences, **tqdm_kwargs):
        """Equivalent of builtin `map`."""
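These behave like their builtin counterparts, plus a progress bar (a minimal sketch with made-up inputs):

```python
from tqdm.contrib import tenumerate, tmap, tzip

letters = ["a", "b", "c"]
for i, letter in tenumerate(letters):      # enumerate() with a progress bar
    pass

pairs = list(tzip([1, 2], ["x", "y"]))     # zip() with a progress bar
totals = list(tmap(len, ["aa", "bbb"]))    # map() with a progress bar
```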

Submodules
~~~~~~~~~~

.. code:: python

    class tqdm.notebook.tqdm(tqdm.tqdm):
        """IPython/Jupyter Notebook widget."""

    class tqdm.auto.tqdm(tqdm.tqdm):
        """Automatically chooses between `tqdm.notebook` and `tqdm.tqdm`."""

    class tqdm.asyncio.tqdm(tqdm.tqdm):
        """Asynchronous version."""
        @classmethod
        def as_completed(cls, fs, *, loop=None, timeout=None, total=None,
                         **tqdm_kwargs):
            """Wrapper for `asyncio.as_completed`."""

    class tqdm.gui.tqdm(tqdm.tqdm):
        """Matplotlib GUI version."""

    class tqdm.tk.tqdm(tqdm.tqdm):
        """Tkinter GUI version."""

    class tqdm.rich.tqdm(tqdm.tqdm):
        """`rich.progress` version."""

    class tqdm.keras.TqdmCallback(keras.callbacks.Callback):
        """Keras callback for epoch and batch progress."""

    class tqdm.dask.TqdmCallback(dask.callbacks.Callback):
        """Dask callback for task progress."""


``contrib``
+++++++++++

The ``tqdm.contrib`` package also contains experimental modules:

- ``tqdm.contrib.itertools``: Thin wrappers around ``itertools``
- ``tqdm.contrib.concurrent``: Thin wrappers around ``concurrent.futures``
- ``tqdm.contrib.slack``: Posts to `Slack <https://slack.com>`__ bots
- ``tqdm.contrib.discord``: Posts to `Discord <https://discord.com>`__ bots
- ``tqdm.contrib.telegram``: Posts to `Telegram <https://telegram.org>`__ bots
- ``tqdm.contrib.bells``: Automagically enables all optional features

  * ``auto``, ``pandas``, ``slack``, ``discord``, ``telegram``
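As a quick taste of ``tqdm.contrib.concurrent`` (a minimal sketch with a made-up workload):

```python
from tqdm.contrib.concurrent import thread_map

# `concurrent.futures.ThreadPoolExecutor.map` with a progress bar;
# results come back as a list, in input order
results = thread_map(lambda x: x * 2, range(8), max_workers=4)
```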

Examples and Advanced Usage
---------------------------

- See the `examples <https://github.com/tqdm/tqdm/tree/master/examples>`__
  folder;
- import the module and run ``help()``;
- consult the `wiki <https://github.com/tqdm/tqdm/wiki>`__;

  * this has an
    `excellent article <https://github.com/tqdm/tqdm/wiki/How-to-make-a-great-Progress-Bar>`__
    on how to make a **great** progressbar;

- check out the `slides from PyData London <https://tqdm.github.io/PyData2019/slides.html>`__, or
- run the |binder-demo|.

Description and additional stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Custom information can be displayed and updated dynamically on ``tqdm`` bars
with the ``desc`` and ``postfix`` arguments:

.. code:: python

    from tqdm import tqdm, trange
    from random import random, randint
    from time import sleep

    with trange(10) as t:
        for i in t:
            # Description will be displayed on the left
            t.set_description('GEN %i' % i)
            # Postfix will be displayed on the right,
            # formatted automatically based on argument's datatype
            t.set_postfix(loss=random(), gen=randint(1,999), str='h',
                          lst=[1, 2])
            sleep(0.1)

    with tqdm(total=10, bar_format="{postfix[0]} {postfix[1][value]:>8.2g}",
              postfix=["Batch", {"value": 0}]) as t:
        for i in range(10):
            sleep(0.1)
            t.postfix[1]["value"] = i / 2
            t.update()
Points to remember when using ``{postfix[...]}`` in the ``bar_format`` string:
|
833 |
+
|
834 |
+
- ``postfix`` also needs to be passed as an initial argument in a compatible
|
835 |
+
format, and
|
836 |
+
- ``postfix`` will be auto-converted to a string if it is a ``dict``-like
|
837 |
+
object. To prevent this behaviour, insert an extra item into the dictionary
|
838 |
+
where the key is not a string.
|
839 |
+
|
840 |
+
Additional ``bar_format`` parameters may also be defined by overriding
|
841 |
+
``format_dict``, and the bar itself may be modified using ``ascii``:
|
842 |
+
|
843 |
+
.. code:: python
|
844 |
+
|
845 |
+
from tqdm import tqdm
|
846 |
+
class TqdmExtraFormat(tqdm):
|
847 |
+
"""Provides a `total_time` format parameter"""
|
848 |
+
@property
|
849 |
+
def format_dict(self):
|
850 |
+
d = super().format_dict
|
851 |
+
total_time = d["elapsed"] * (d["total"] or 0) / max(d["n"], 1)
|
852 |
+
d.update(total_time=self.format_interval(total_time) + " in total")
|
853 |
+
return d
|
854 |
+
|
855 |
+
for i in TqdmExtraFormat(
|
856 |
+
range(9), ascii=" .oO0",
|
857 |
+
bar_format="{total_time}: {percentage:.0f}%|{bar}{r_bar}"):
|
858 |
+
if i == 4:
|
859 |
+
break
|
860 |
+
|
861 |
+
.. code::
|
862 |
+
|
863 |
+
00:00 in total: 44%|0000. | 4/9 [00:00<00:00, 962.93it/s]
|
864 |
+
|
865 |
+
Note that ``{bar}`` also supports a format specifier ``[width][type]``.
|
866 |
+
|
867 |
+
- ``width``
|
868 |
+
|
869 |
+
* unspecified (default): automatic to fill ``ncols``
|
870 |
+
* ``int >= 0``: fixed width overriding ``ncols`` logic
|
871 |
+
* ``int < 0``: subtract from the automatic default
|
872 |
+
|
873 |
+
- ``type``
|
874 |
+
|
875 |
+
* ``a``: ascii (``ascii=True`` override)
|
876 |
+
* ``u``: unicode (``ascii=False`` override)
|
877 |
+
* ``b``: blank (``ascii=" "`` override)
|
878 |
+
|
879 |
+
This means a fixed bar with right-justified text may be created by using:
|
880 |
+
``bar_format="{l_bar}{bar:10}|{bar:-10b}right-justified"``
|
881 |
+
|
882 |
+
Nested progress bars
|
883 |
+
~~~~~~~~~~~~~~~~~~~~
|
884 |
+
|
885 |
+
``tqdm`` supports nested progress bars. Here's an example:
|
886 |
+
|
887 |
+
.. code:: python
|
888 |
+
|
889 |
+
from tqdm.auto import trange
|
890 |
+
from time import sleep
|
891 |
+
|
892 |
+
for i in trange(4, desc='1st loop'):
|
893 |
+
for j in trange(5, desc='2nd loop'):
|
894 |
+
for k in trange(50, desc='3rd loop', leave=False):
|
895 |
+
sleep(0.01)
|
896 |
+
|
897 |
+
For manual control over positioning (e.g. for multi-processing use),
|
898 |
+
you may specify ``position=n`` where ``n=0`` for the outermost bar,
|
899 |
+
``n=1`` for the next, and so on.
|
900 |
+
However, it's best to check if ``tqdm`` can work without manual ``position``
|
901 |
+
first.
|
902 |
+
|
903 |
+
.. code:: python
|
904 |
+
|
905 |
+
from time import sleep
|
906 |
+
from tqdm import trange, tqdm
|
907 |
+
from multiprocessing import Pool, RLock, freeze_support
|
908 |
+
|
909 |
+
L = list(range(9))
|
910 |
+
|
911 |
+
def progresser(n):
|
912 |
+
interval = 0.001 / (n + 2)
|
913 |
+
total = 5000
|
914 |
+
text = f"#{n}, est. {interval * total:<04.2}s"
|
915 |
+
for _ in trange(total, desc=text, position=n):
|
916 |
+
sleep(interval)
|
917 |
+
|
918 |
+
if __name__ == '__main__':
|
919 |
+
freeze_support() # for Windows support
|
920 |
+
tqdm.set_lock(RLock()) # for managing output contention
|
921 |
+
p = Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),))
|
922 |
+
p.map(progresser, L)
|
923 |
+
|
924 |
+
Note that in Python 3, ``tqdm.write`` is thread-safe:
|
925 |
+
|
926 |
+
.. code:: python
|
927 |
+
|
928 |
+
from time import sleep
|
929 |
+
from tqdm import tqdm, trange
|
930 |
+
from concurrent.futures import ThreadPoolExecutor
|
931 |
+
|
932 |
+
L = list(range(9))
|
933 |
+
|
934 |
+
def progresser(n):
|
935 |
+
interval = 0.001 / (n + 2)
|
936 |
+
total = 5000
|
937 |
+
text = f"#{n}, est. {interval * total:<04.2}s"
|
938 |
+
for _ in trange(total, desc=text):
|
939 |
+
sleep(interval)
|
940 |
+
if n == 6:
|
941 |
+
tqdm.write("n == 6 completed.")
|
942 |
+
tqdm.write("`tqdm.write()` is thread-safe in py3!")
|
943 |
+
|
944 |
+
if __name__ == '__main__':
|
945 |
+
with ThreadPoolExecutor() as p:
|
946 |
+
p.map(progresser, L)

Hooks and callbacks
~~~~~~~~~~~~~~~~~~~

``tqdm`` can easily support callbacks/hooks and manual updates.
Here's an example with ``urllib``:

**``urllib.urlretrieve`` documentation**

    | [...]
    | If present, the hook function will be called once
    | on establishment of the network connection and once after each block read
    | thereafter. The hook will be passed three arguments; a count of blocks
    | transferred so far, a block size in bytes, and the total size of the file.
    | [...]

.. code:: python

    import urllib, os
    from tqdm import tqdm
    urllib = getattr(urllib, 'request', urllib)

    class TqdmUpTo(tqdm):
        """Provides `update_to(n)` which uses `tqdm.update(delta_n)`."""
        def update_to(self, b=1, bsize=1, tsize=None):
            """
            b : int, optional
                Number of blocks transferred so far [default: 1].
            bsize : int, optional
                Size of each block (in tqdm units) [default: 1].
            tsize : int, optional
                Total size (in tqdm units). If [default: None] remains unchanged.
            """
            if tsize is not None:
                self.total = tsize
            return self.update(b * bsize - self.n)  # also sets self.n = b * bsize

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    with TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
                  desc=eg_link.split('/')[-1]) as t:  # all optional kwargs
        urllib.urlretrieve(eg_link, filename=os.devnull,
                           reporthook=t.update_to, data=None)
        t.total = t.n

Inspired by `twine#242 <https://github.com/pypa/twine/pull/242>`__.
Functional alternative in
`examples/tqdm_wget.py <https://github.com/tqdm/tqdm/blob/master/examples/tqdm_wget.py>`__.

It is recommended to use ``miniters=1`` whenever there are potentially
large differences in iteration speed (e.g. downloading a file over
a patchy connection).

**Wrapping read/write methods**

To measure throughput through a file-like object's ``read`` or ``write``
methods, use ``CallbackIOWrapper``:

.. code:: python

    from tqdm.auto import tqdm
    from tqdm.utils import CallbackIOWrapper

    with tqdm(total=file_obj.size,
              unit='B', unit_scale=True, unit_divisor=1024) as t:
        fobj = CallbackIOWrapper(t.update, file_obj, "read")
        while True:
            chunk = fobj.read(chunk_size)
            if not chunk:
                break
        t.reset()
        # ... continue to use `t` for something else

Alternatively, use the even simpler ``wrapattr`` convenience function,
which would condense both the ``urllib`` and ``CallbackIOWrapper`` examples
down to:

.. code:: python

    import urllib, os
    from tqdm import tqdm

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    response = getattr(urllib, 'request', urllib).urlopen(eg_link)
    with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                       miniters=1, desc=eg_link.split('/')[-1],
                       total=getattr(response, 'length', None)) as fout:
        for chunk in response:
            fout.write(chunk)

The ``requests`` equivalent is nearly identical:

.. code:: python

    import requests, os
    from tqdm import tqdm

    eg_link = "https://caspersci.uk.to/matryoshka.zip"
    response = requests.get(eg_link, stream=True)
    with tqdm.wrapattr(open(os.devnull, "wb"), "write",
                       miniters=1, desc=eg_link.split('/')[-1],
                       total=int(response.headers.get('content-length', 0))) as fout:
        for chunk in response.iter_content(chunk_size=4096):
            fout.write(chunk)

**Custom callback**

``tqdm`` is known for intelligently skipping unnecessary displays. To make a
custom callback take advantage of this, simply use the return value of
``update()``. This is set to ``True`` if a ``display()`` was triggered.

.. code:: python

    from tqdm.auto import tqdm as std_tqdm

    def external_callback(*args, **kwargs):
        ...

    class TqdmExt(std_tqdm):
        def update(self, n=1):
            displayed = super().update(n)
            if displayed:
                external_callback(**self.format_dict)
            return displayed

``asyncio``
~~~~~~~~~~~

Note that ``break`` isn't currently caught by asynchronous iterators.
This means that ``tqdm`` cannot clean up after itself in this case:

.. code:: python

    from tqdm.asyncio import tqdm

    async for i in tqdm(range(9)):
        if i == 2:
            break

Instead, either call ``pbar.close()`` manually or use the context manager syntax:

.. code:: python

    from tqdm.asyncio import tqdm

    with tqdm(range(9)) as pbar:
        async for i in pbar:
            if i == 2:
                break
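
The context-manager form can be run end-to-end like this (a minimal sketch;
the ``async def``/``asyncio.run`` scaffolding and the ``StringIO`` output sink
are additions for illustration):

.. code:: python

    import asyncio
    from io import StringIO
    from tqdm.asyncio import tqdm

    async def main():
        # `with` guarantees close() runs even though `break`
        # skips the StopAsyncIteration cleanup path
        with tqdm(range(9), file=StringIO()) as pbar:
            async for i in pbar:
                if i == 2:
                    break
        return pbar.n

    counted = asyncio.run(main())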

Pandas Integration
~~~~~~~~~~~~~~~~~~

Due to popular demand we've added support for ``pandas`` -- here's an example
for ``DataFrame.progress_apply`` and ``DataFrameGroupBy.progress_apply``:

.. code:: python

    import pandas as pd
    import numpy as np
    from tqdm import tqdm

    df = pd.DataFrame(np.random.randint(0, 100, (100000, 6)))

    # Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
    # (can use `tqdm.gui.tqdm`, `tqdm.notebook.tqdm`, optional kwargs, etc.)
    tqdm.pandas(desc="my bar!")

    # Now you can use `progress_apply` instead of `apply`
    # and `progress_map` instead of `map`
    df.progress_apply(lambda x: x**2)
    # can also groupby:
    # df.groupby(0).progress_apply(lambda x: x**2)

In case you're interested in how this works (and how to modify it for your
own callbacks), see the
`examples <https://github.com/tqdm/tqdm/tree/master/examples>`__
folder or import the module and run ``help()``.

Keras Integration
~~~~~~~~~~~~~~~~~

A ``keras`` callback is also available:

.. code:: python

    from tqdm.keras import TqdmCallback

    ...

    model.fit(..., verbose=0, callbacks=[TqdmCallback()])

Dask Integration
~~~~~~~~~~~~~~~~

A ``dask`` callback is also available:

.. code:: python

    from tqdm.dask import TqdmCallback

    with TqdmCallback(desc="compute"):
        ...
        arr.compute()

    # or use callback globally
    cb = TqdmCallback(desc="global")
    cb.register()
    arr.compute()

IPython/Jupyter Integration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

IPython/Jupyter is supported via the ``tqdm.notebook`` submodule:

.. code:: python

    from tqdm.notebook import trange, tqdm
    from time import sleep

    for i in trange(3, desc='1st loop'):
        for j in tqdm(range(100), desc='2nd loop'):
            sleep(0.01)

In addition to ``tqdm`` features, the submodule provides a native Jupyter
widget (compatible with IPython v1-v4 and Jupyter), fully working nested bars
and colour hints (blue: normal, green: completed, red: error/interrupt,
light blue: no ETA); as demonstrated below.

|Screenshot-Jupyter1|
|Screenshot-Jupyter2|
|Screenshot-Jupyter3|

The ``notebook`` version supports percentage or pixels for overall width
(e.g.: ``ncols='100%'`` or ``ncols='480px'``).

It is also possible to let ``tqdm`` automatically choose between
console or notebook versions by using the ``autonotebook`` submodule:

.. code:: python

    from tqdm.autonotebook import tqdm
    tqdm.pandas()

Note that this will issue a ``TqdmExperimentalWarning`` if run in a notebook
since it is not meant to be possible to distinguish between ``jupyter notebook``
and ``jupyter console``. Use ``auto`` instead of ``autonotebook`` to suppress
this warning.

Note that notebooks will display the bar in the cell where it was created.
This may be a different cell from the one where it is used.
If this is not desired, either

- delay the creation of the bar to the cell where it must be displayed, or
- create the bar with ``display=False``, and in a later cell call
  ``display(bar.container)``:

.. code:: python

    from tqdm.notebook import tqdm
    pbar = tqdm(..., display=False)

.. code:: python

    # different cell
    display(pbar.container)

The ``keras`` callback has a ``display()`` method which can be used likewise:

.. code:: python

    from tqdm.keras import TqdmCallback
    cbk = TqdmCallback(display=False)

.. code:: python

    # different cell
    cbk.display()
    model.fit(..., verbose=0, callbacks=[cbk])

Another possibility is to have a single bar (near the top of the notebook)
which is constantly re-used (using ``reset()`` rather than ``close()``).
For this reason, the notebook version (unlike the CLI version) does not
automatically call ``close()`` upon ``Exception``.

.. code:: python

    from tqdm.notebook import tqdm
    pbar = tqdm()

.. code:: python

    # different cell
    iterable = range(100)
    pbar.reset(total=len(iterable))  # initialise with new `total`
    for i in iterable:
        pbar.update()
    pbar.refresh()  # force print final status but don't `close()`

Custom Integration
~~~~~~~~~~~~~~~~~~

To change the default arguments (such as making ``dynamic_ncols=True``),
simply use built-in Python magic:

.. code:: python

    from functools import partial
    from tqdm import tqdm as std_tqdm
    tqdm = partial(std_tqdm, dynamic_ncols=True)

For further customisation,
``tqdm`` may be inherited from to create custom callbacks (as with the
``TqdmUpTo`` example `above <#hooks-and-callbacks>`__) or for custom frontends
(e.g. GUIs such as notebook or plotting packages). In the latter case:

1. ``def __init__()`` to call ``super().__init__(..., gui=True)`` to disable
   terminal ``status_printer`` creation.
2. Redefine: ``close()``, ``clear()``, ``display()``.

Consider overloading ``display()`` to use e.g.
``self.frontend(**self.format_dict)`` instead of ``self.sp(repr(self))``.

Some submodule examples of inheritance:

- `tqdm/notebook.py <https://github.com/tqdm/tqdm/blob/master/tqdm/notebook.py>`__
- `tqdm/gui.py <https://github.com/tqdm/tqdm/blob/master/tqdm/gui.py>`__
- `tqdm/tk.py <https://github.com/tqdm/tqdm/blob/master/tqdm/tk.py>`__
- `tqdm/contrib/slack.py <https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/slack.py>`__
- `tqdm/contrib/discord.py <https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/discord.py>`__
- `tqdm/contrib/telegram.py <https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/telegram.py>`__

Dynamic Monitor/Meter
~~~~~~~~~~~~~~~~~~~~~

You can use a ``tqdm`` as a meter which is not monotonically increasing.
This could be because ``n`` decreases (e.g. a CPU usage monitor) or ``total``
changes.

One example would be recursively searching for files. The ``total`` is the
number of objects found so far, while ``n`` is the number of those objects which
are files (rather than folders):

.. code:: python

    from tqdm import tqdm
    import os.path

    def find_files_recursively(path, show_progress=True):
        files = []
        # total=1 assumes `path` is a file
        t = tqdm(total=1, unit="file", disable=not show_progress)
        if not os.path.exists(path):
            raise IOError("Cannot find: " + path)

        def append_found_file(f):
            files.append(f)
            t.update()

        def list_found_dir(path):
            """returns os.listdir(path) assuming os.path.isdir(path)"""
            listing = os.listdir(path)
            # subtract 1 since a "file" we found was actually this directory
            t.total += len(listing) - 1
            # fancy way to give info without forcing a refresh
            t.set_postfix(dir=path[-10:], refresh=False)
            t.update(0)  # may trigger a refresh
            return listing

        def recursively_search(path):
            if os.path.isdir(path):
                for f in list_found_dir(path):
                    recursively_search(os.path.join(path, f))
            else:
                append_found_file(path)

        recursively_search(path)
        t.set_postfix(dir=path)
        t.close()
        return files

Using ``update(0)`` is a handy way to let ``tqdm`` decide when to trigger a
display refresh to avoid console spamming.

Writing messages
~~~~~~~~~~~~~~~~

This is a work in progress (see
`#737 <https://github.com/tqdm/tqdm/issues/737>`__).

Since ``tqdm`` uses a simple printing mechanism to display progress bars,
you should not write any message in the terminal using ``print()`` while
a progressbar is open.

To write messages in the terminal without any collision with ``tqdm`` bar
display, a ``.write()`` method is provided:

.. code:: python

    from tqdm.auto import tqdm, trange
    from time import sleep

    bar = trange(10)
    for i in bar:
        # Print using tqdm class method .write()
        sleep(0.1)
        if not (i % 3):
            tqdm.write("Done task %i" % i)
    # Can also use bar.write()

By default, this will print to standard output ``sys.stdout``, but you can
specify any file-like object using the ``file`` argument. For example, this
can be used to redirect messages to a log file or class.
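
For instance, redirecting a message to an in-memory buffer (a minimal sketch;
any object with a ``write()`` method works):

.. code:: python

    from io import StringIO
    from tqdm import tqdm

    buf = StringIO()
    # `file` may be any file-like object, not just a real file
    tqdm.write("Done task 3", file=buf)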

Redirecting writing
~~~~~~~~~~~~~~~~~~~

If using a library that can print messages to the console, editing the library
by replacing ``print()`` with ``tqdm.write()`` may not be desirable.
In that case, redirecting ``sys.stdout`` to ``tqdm.write()`` is an option.

To redirect ``sys.stdout``, create a file-like class that will write
any input string to ``tqdm.write()``, and supply the arguments
``file=sys.stdout, dynamic_ncols=True``.

A reusable canonical example is given below:

.. code:: python

    from time import sleep
    import contextlib
    import sys
    from tqdm import tqdm
    from tqdm.contrib import DummyTqdmFile


    @contextlib.contextmanager
    def std_out_err_redirect_tqdm():
        orig_out_err = sys.stdout, sys.stderr
        try:
            sys.stdout, sys.stderr = map(DummyTqdmFile, orig_out_err)
            yield orig_out_err[0]
        # Relay exceptions
        except Exception as exc:
            raise exc
        # Always restore sys.stdout/err if necessary
        finally:
            sys.stdout, sys.stderr = orig_out_err

    def some_fun(i):
        print("Fee, fi, fo,".split()[i])

    # Redirect stdout to tqdm.write() (don't forget the `as orig_stdout`)
    with std_out_err_redirect_tqdm() as orig_stdout:
        # tqdm needs the original stdout
        # and dynamic_ncols=True to autodetect console width
        for i in tqdm(range(3), file=orig_stdout, dynamic_ncols=True):
            sleep(.5)
            some_fun(i)

    # After the `with`, printing is restored
    print("Done!")

Redirecting ``logging``
~~~~~~~~~~~~~~~~~~~~~~~

Similar to ``sys.stdout``/``sys.stderr`` as detailed above, console ``logging``
may also be redirected to ``tqdm.write()``.

Warning: if also redirecting ``sys.stdout``/``sys.stderr``, make sure to
redirect ``logging`` first if needed.

Helper methods are available in ``tqdm.contrib.logging``. For example:

.. code:: python

    import logging
    from tqdm import trange
    from tqdm.contrib.logging import logging_redirect_tqdm

    LOG = logging.getLogger(__name__)

    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        with logging_redirect_tqdm():
            for i in trange(9):
                if i == 4:
                    LOG.info("console logging redirected to `tqdm.write()`")
        # logging restored

Monitoring thread, intervals and miniters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``tqdm`` implements a few tricks to increase efficiency and reduce overhead.

- Avoid unnecessary frequent bar refreshing: ``mininterval`` defines how long
  to wait between each refresh. ``tqdm`` always gets updated in the background,
  but it will display only every ``mininterval``.
- Reduce number of calls to check system clock/time.
- ``mininterval`` is more intuitive to configure than ``miniters``.
  A clever adjustment system ``dynamic_miniters`` will automatically adjust
  ``miniters`` to the amount of iterations that fit into time ``mininterval``.
  Essentially, ``tqdm`` will check if it's time to print without actually
  checking time. This behaviour can still be bypassed by manually setting
  ``miniters``.

However, consider a case with a combination of fast and slow iterations.
After a few fast iterations, ``dynamic_miniters`` will set ``miniters`` to a
large number. When iteration rate subsequently slows, ``miniters`` will
remain large and thus reduce display update frequency. To address this:

- ``maxinterval`` defines the maximum time between display refreshes.
  A concurrent monitoring thread checks for overdue updates and forces one
  where necessary.

The monitoring thread should not have a noticeable overhead, and guarantees
updates at least every 10 seconds by default.
This value can be directly changed by setting the ``monitor_interval`` of
any ``tqdm`` instance (i.e. ``t = tqdm.tqdm(...); t.monitor_interval = 2``).
The monitor thread may be disabled application-wide by setting
``tqdm.tqdm.monitor_interval = 0`` before instantiation of any ``tqdm`` bar.
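
The knobs above can be combined as in this sketch (the interval values are
arbitrary examples, and the ``StringIO`` sink just suppresses console output):

.. code:: python

    from io import StringIO
    from tqdm import tqdm

    # Disable the monitoring thread application-wide
    # (must happen before any bar is created).
    tqdm.monitor_interval = 0

    # Refresh at most every 0.5s; maxinterval=2 caps how long
    # dynamic_miniters may delay a refresh.
    for _ in tqdm(range(1000), file=StringIO(),
                  mininterval=0.5, maxinterval=2):
        pass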


Merch
-----

You can buy `tqdm branded merch <https://tqdm.github.io/merch>`__ now!

Contributions
-------------

|GitHub-Commits| |GitHub-Issues| |GitHub-PRs| |OpenHub-Status| |GitHub-Contributions| |CII Best Practices|

All source code is hosted on `GitHub <https://github.com/tqdm/tqdm>`__.
Contributions are welcome.

See the
`CONTRIBUTING <https://github.com/tqdm/tqdm/blob/master/CONTRIBUTING.md>`__
file for more information.

Developers who have made significant contributions, ranked by *SLoC*
(surviving lines of code,
`git fame <https://github.com/casperdcl/git-fame>`__ ``-wMC --excl '\.(png|gif|jpg)$'``),
are:

==================== ======================================================== ==== ================================
Name                 ID                                                       SLoC Notes
==================== ======================================================== ==== ================================
Casper da Costa-Luis `casperdcl <https://github.com/casperdcl>`__             ~80% primary maintainer |Gift-Casper|
Stephen Larroque     `lrq3000 <https://github.com/lrq3000>`__                 ~9%  team member
Martin Zugnoni       `martinzugnoni <https://github.com/martinzugnoni>`__     ~3%
Daniel Ecer          `de-code <https://github.com/de-code>`__                 ~2%
Richard Sheridan     `richardsheridan <https://github.com/richardsheridan>`__ ~1%
Guangshuo Chen       `chengs <https://github.com/chengs>`__                   ~1%
Helio Machado        `0x2b3bfa0 <https://github.com/0x2b3bfa0>`__             ~1%
Kyle Altendorf       `altendky <https://github.com/altendky>`__               <1%
Noam Yorav-Raphael   `noamraph <https://github.com/noamraph>`__               <1%  original author
Matthew Stevens      `mjstevens777 <https://github.com/mjstevens777>`__       <1%
Hadrien Mary         `hadim <https://github.com/hadim>`__                     <1%  team member
Mikhail Korobov      `kmike <https://github.com/kmike>`__                     <1%  team member
==================== ======================================================== ==== ================================

Ports to Other Languages
~~~~~~~~~~~~~~~~~~~~~~~~

A list is available on
`this wiki page <https://github.com/tqdm/tqdm/wiki/tqdm-ports>`__.


LICENCE
-------

Open Source (OSI approved): |LICENCE|

Citation information: |DOI|

|README-Hits| (Since 19 May 2016)

.. |Logo| image:: https://tqdm.github.io/img/logo.gif
.. |Screenshot| image:: https://tqdm.github.io/img/tqdm.gif
.. |Video| image:: https://tqdm.github.io/img/video.jpg
   :target: https://tqdm.github.io/video
.. |Slides| image:: https://tqdm.github.io/img/slides.jpg
   :target: https://tqdm.github.io/PyData2019/slides.html
.. |Merch| image:: https://tqdm.github.io/img/merch.jpg
   :target: https://tqdm.github.io/merch
.. |Build-Status| image:: https://img.shields.io/github/actions/workflow/status/tqdm/tqdm/test.yml?branch=master&label=tqdm&logo=GitHub
   :target: https://github.com/tqdm/tqdm/actions/workflows/test.yml
.. |Coverage-Status| image:: https://img.shields.io/coveralls/github/tqdm/tqdm/master?logo=coveralls
   :target: https://coveralls.io/github/tqdm/tqdm
.. |Branch-Coverage-Status| image:: https://codecov.io/gh/tqdm/tqdm/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/tqdm/tqdm
.. |Codacy-Grade| image:: https://app.codacy.com/project/badge/Grade/3f965571598f44549c7818f29cdcf177
   :target: https://www.codacy.com/gh/tqdm/tqdm/dashboard
.. |CII Best Practices| image:: https://bestpractices.coreinfrastructure.org/projects/3264/badge
   :target: https://bestpractices.coreinfrastructure.org/projects/3264
.. |GitHub-Status| image:: https://img.shields.io/github/tag/tqdm/tqdm.svg?maxAge=86400&logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/releases
.. |GitHub-Forks| image:: https://img.shields.io/github/forks/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/network
.. |GitHub-Stars| image:: https://img.shields.io/github/stars/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/stargazers
.. |GitHub-Commits| image:: https://img.shields.io/github/commit-activity/y/tqdm/tqdm.svg?logo=git&logoColor=white
   :target: https://github.com/tqdm/tqdm/graphs/commit-activity
.. |GitHub-Issues| image:: https://img.shields.io/github/issues-closed/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/issues?q=
.. |GitHub-PRs| image:: https://img.shields.io/github/issues-pr-closed/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/pulls
.. |GitHub-Contributions| image:: https://img.shields.io/github/contributors/tqdm/tqdm.svg?logo=github&logoColor=white
   :target: https://github.com/tqdm/tqdm/graphs/contributors
.. |GitHub-Updated| image:: https://img.shields.io/github/last-commit/tqdm/tqdm/master.svg?logo=github&logoColor=white&label=pushed
   :target: https://github.com/tqdm/tqdm/pulse
.. |Gift-Casper| image:: https://img.shields.io/badge/dynamic/json.svg?color=ff69b4&label=gifts%20received&prefix=%C2%A3&query=%24..sum&url=https%3A%2F%2Fcaspersci.uk.to%2Fgifts.json
   :target: https://cdcl.ml/sponsor
.. |Versions| image:: https://img.shields.io/pypi/v/tqdm.svg
   :target: https://tqdm.github.io/releases
.. |PyPI-Downloads| image:: https://img.shields.io/pypi/dm/tqdm.svg?label=pypi%20downloads&logo=PyPI&logoColor=white
   :target: https://pepy.tech/project/tqdm
.. |Py-Versions| image:: https://img.shields.io/pypi/pyversions/tqdm.svg?logo=python&logoColor=white
   :target: https://pypi.org/project/tqdm
.. |Conda-Forge-Status| image:: https://img.shields.io/conda/v/conda-forge/tqdm.svg?label=conda-forge&logo=conda-forge
   :target: https://anaconda.org/conda-forge/tqdm
.. |Snapcraft| image:: https://img.shields.io/badge/snap-install-82BEA0.svg?logo=snapcraft
   :target: https://snapcraft.io/tqdm
.. |Docker| image:: https://img.shields.io/badge/docker-pull-blue.svg?logo=docker&logoColor=white
   :target: https://hub.docker.com/r/tqdm/tqdm
.. |Libraries-Rank| image:: https://img.shields.io/librariesio/sourcerank/pypi/tqdm.svg?logo=koding&logoColor=white
   :target: https://libraries.io/pypi/tqdm
.. |Libraries-Dependents| image:: https://img.shields.io/librariesio/dependent-repos/pypi/tqdm.svg?logo=koding&logoColor=white
   :target: https://github.com/tqdm/tqdm/network/dependents
.. |OpenHub-Status| image:: https://www.openhub.net/p/tqdm/widgets/project_thin_badge?format=gif
   :target: https://www.openhub.net/p/tqdm?ref=Thin+badge
.. |awesome-python| image:: https://awesome.re/mentioned-badge.svg
   :target: https://github.com/vinta/awesome-python
.. |LICENCE| image:: https://img.shields.io/pypi/l/tqdm.svg
   :target: https://raw.githubusercontent.com/tqdm/tqdm/master/LICENCE
.. |DOI| image:: https://img.shields.io/badge/DOI-10.5281/zenodo.595120-blue.svg
   :target: https://doi.org/10.5281/zenodo.595120
.. |binder-demo| image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/tqdm/tqdm/master?filepath=DEMO.ipynb
.. |Screenshot-Jupyter1| image:: https://tqdm.github.io/img/jupyter-1.gif
.. |Screenshot-Jupyter2| image:: https://tqdm.github.io/img/jupyter-2.gif
.. |Screenshot-Jupyter3| image:: https://tqdm.github.io/img/jupyter-3.gif
.. |README-Hits| image:: https://caspersci.uk.to/cgi-bin/hits.cgi?q=tqdm&style=social&r=https://github.com/tqdm/tqdm&l=https://tqdm.github.io/img/favicon.png&f=https://tqdm.github.io/img/logo.gif
   :target: https://caspersci.uk.to/cgi-bin/hits.cgi?q=tqdm&a=plot&r=https://github.com/tqdm/tqdm&l=https://tqdm.github.io/img/favicon.png&f=https://tqdm.github.io/img/logo.gif&style=social

llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/WHEEL
ADDED (+5 lines)::

    Wheel-Version: 1.0
    Generator: bdist_wheel (0.43.0)
    Root-Is-Purelib: true
    Tag: py3-none-any

llmeval-env/lib/python3.10/site-packages/tqdm-4.66.4.dist-info/entry_points.txt
ADDED (+2 lines)::

    [console_scripts]
    tqdm = tqdm.cli:main