Dataset columns:

column           dtype          min                  max
question_id      int64          59.5M                79.7M
creation_date    stringdate     2020-01-01 00:00:00  2025-06-28 00:00:00
link             stringlengths  60                   163
question         stringlengths  53                   28.9k
accepted_answer  stringlengths  26                   29.3k
question_vote    int64          1                    410
answer_vote      int64          -9                   482
79,663,680
2025-6-12
https://stackoverflow.com/questions/79663680/plotting-the-components-of-a-general-solution-returned-by-sympy
I have written the following code that returns a solution.

```python
import sympy as sym
import numpy as np

z = sym.Symbol('z')
e = sym.Symbol('e')
f = sym.Function('f')

edo = sym.diff(f(z), z, 2) + e * f(z)
soln = sym.dsolve(edo, f(z))
print(soln.rhs)
```

The above code returns:

```
C1*exp(-z*sqrt(-e)) + C2*exp(z*sqrt(-e))
```

I want to be able to access the elements of this `soln.rhs` directly and plot them. I could copy and paste the results, but I want to do something like `plot(x, soln.rhs[0])`, which would plot `exp(-z*sqrt(-e))`. The reason for this is that I am analysing many different types of ODE, some of which return solutions that are combinations of Airy functions, like `C1*airyai(-e + z) + C2*airybi(-e + z)`. Does anyone know how to access the elements of the general solution? I have looked through the documentation and nothing really pops out.
What you are looking for is the `args` attribute of a SymPy symbolic expression. For example:

```python
print(soln.rhs.args[0])  # C1*exp(-z*sqrt(-e))
print(soln.rhs.args[1])  # C2*exp(z*sqrt(-e))
```

You might also want to insert appropriate values for the integration constants by using the `subs` method:

```python
from sympy import symbols, plot

C1, C2 = symbols("C1, C2")
soln.rhs.subs({C1: 2, C2: 3})  # random numeric values to show how to do it
```

Then, you can plot it:

```python
# plot the solution for z in [0, 10]
plot(soln.rhs.subs({C1: 2, C2: 3}), (z, 0, 10))
```
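If you'd rather not redeclare the constants by hand, a minimal sketch of an alternative (assuming the solution is a sum with one integration constant per term, that the term order of `args` is acceptable, and that `e` is given a numeric value such as -1 before plotting):

```python
# recover the integration constants from the solution itself
constants = sorted(soln.rhs.free_symbols - {z, e}, key=lambda s: s.name)  # [C1, C2]

for term in soln.rhs.args:                       # one term per constant
    expr = term.subs({c: 1 for c in constants})  # strip the constant factor
    # e = -1 turns sqrt(-e) into 1, so the terms become exp(-z) and exp(z)
    plot(expr.subs(e, -1), (z, 0, 10))
```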
1
2
79,663,153
2025-6-12
https://stackoverflow.com/questions/79663153/correct-way-to-embed-and-bundle-python-in-c-to-avoid-modulenotfounderror-enc
I am trying to embed Python inside my C++ DLL. The idea is that the DLL, once distributed, should be sufficient and not rely on other installations and downloads. Interestingly, the below "sort of" works, but only in my solution directory, since that is where my `vcpkg_installed` is. How can I make my DLL not be required to be near `vcpkg_installed`?

Code

`py_wrap.cpp` (the DLL):

```cpp
void assertPyInit() {
    if (!Py_IsInitialized()) {
        PyConfig config;
        PyConfig_InitPythonConfig(&config);

        // Get the executable path instead of current working directory
        wchar_t exePath[MAX_PATH];
        GetModuleFileNameW(NULL, exePath, MAX_PATH);

        // Remove the executable name to get the directory
        std::wstring exeDir = exePath;
        size_t lastSlash = exeDir.find_last_of(L"\\");
        if (lastSlash != std::wstring::npos) {
            exeDir = exeDir.substr(0, lastSlash);
        }

        // Now build Python path relative to executable location
        std::wstring pythonHome = exeDir + L"\\..\\..\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3";

        // Resolve the full path to eliminate .. references
        wchar_t resolvedPath[MAX_PATH];
        GetFullPathNameW(pythonHome.c_str(), MAX_PATH, resolvedPath, NULL);
        pythonHome = resolvedPath;

        std::wstring pythonLib = pythonHome + L"\\Lib";
        std::wstring pythonSitePackages = pythonLib + L"\\site-packages";
        std::wstring pythonDLLs = pythonHome + L"\\DLLs";

        // Set the Python home directory
        PyConfig_SetString(&config, &config.home, pythonHome.c_str());

        // Set the module search paths
        std::wstring pythonPathEnv = pythonLib + L";" + pythonSitePackages + L";" + pythonDLLs;
        PyConfig_SetString(&config, &config.pythonpath_env, pythonPathEnv.c_str());

        PyStatus status = Py_InitializeFromConfig(&config);
        PyConfig_Clear(&config);
        if (PyStatus_Exception(status)) {
            PyErr_Print();
            return;
        }

        PyRun_SimpleString("import sys");
        PyRun_SimpleString("sys.path.append(\".\")");
    }
}

void MY_DLL pyPrint(const char* message) {
    assertPyInit();
    PyObject* pyStr = PyUnicode_FromString(message);
    if (pyStr) {
        PyObject* builtins = PyEval_GetBuiltins();
        PyObject* printFunc = PyDict_GetItemString(builtins, "print");
        if (printFunc && PyCallable_Check(printFunc)) {
            PyObject* args = PyTuple_Pack(1, pyStr);
            PyObject_CallObject(printFunc, args);
            Py_DECREF(args);
        }
        Py_DECREF(pyStr);
    }
}
```

`DLLTester.cpp` (client app):

```cpp
#include <iostream>
#include "py_wrap.h"

int main() {
    std::cout << "Hello\n";
    pyPrint("Hello from python :D !");
}
```

File structure and IO

```
PS D:\RedactedLabs\Dev> ls AsyncDLLMQL\x64\Release\

    Directory: D:\RedactedLabs\Dev\AsyncDLLMQL\x64\Release

Mode    LastWriteTime          Length   Name
----    -------------          ------   ----
-a----  6/11/2025 10:48 PM      476672  AsyncDLLMQL.dll
-a----  6/11/2025 10:48 PM        4297  AsyncDLLMQL.exp
-a----  6/11/2025 10:26 PM        7752  AsyncDLLMQL.lib
-a----  6/11/2025 10:48 PM     7376896  AsyncDLLMQL.pdb
-a----  6/11/2025 10:48 PM       12288  DLLTester.exe
-a----  6/11/2025 10:48 PM      790528  DLLTester.pdb
-a----  6/6/2025   4:39 PM       56320  python3.dll
-a----  6/6/2025   4:39 PM     7273984  python312.dll

PS D:\RedactedLabs\Dev> ls .\scraps\

    Directory: D:\RedactedLabs\Dev\scraps

(same file listing as above)

PS D:\RedactedLabs\Dev> .\AsyncDLLMQL\x64\Release\DLLTester.exe
Hello
Hello from python :D !

PS D:\RedactedLabs\Dev> .\scraps\DLLTester.exe
Hello
Python path configuration:
  PYTHONHOME = 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3'
  PYTHONPATH = 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib;D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib\site-packages;D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\DLLs'
  program name = 'python'
  isolated = 0
  environment = 1
  user site = 1
  safe_path = 0
  import site = 1
  is in build tree = 0
  stdlib dir = 'D:\RedactedLabs\vcpkg_installed\x64-windows-static-md\x64-windows-static-md\tools\python3\Lib'
  sys._base_executable = 'D:\\RedactedLabs\\Dev\\scraps\\DLLTester.exe'
  sys.base_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3'
  sys.base_exec_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3'
  sys.platlibdir = 'DLLs'
  sys.executable = 'D:\\RedactedLabs\\Dev\\scraps\\DLLTester.exe'
  sys.prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3'
  sys.exec_prefix = 'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3'
  sys.path = [
    'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib',
    'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib\\site-packages',
    'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\DLLs',
    'D:\\RedactedLabs\\Dev\\scraps\\python312.zip',
    'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\DLLs',
    'D:\\RedactedLabs\\vcpkg_installed\\x64-windows-static-md\\x64-windows-static-md\\tools\\python3\\Lib',
    'D:\\RedactedLabs\\Dev\\scraps',
  ]
ModuleNotFoundError: No module named 'encodings'
```

Finally, my linker settings and other useful info:

- Platform: Windows
- IDE: Visual Studio
- Use Vcpkg Manifest: Yes
- Target triplet: "x64-windows-static-md"
- Additional include directories: `$(SolutionDir)vcpkg_installed\x64-windows-static-md\x64-windows-static-md\include\python3.12`
- Additional Library Directories: `$(VcpkgInstalledDir)\x64-windows-static-md\lib`
CPython needs its standard library to exist at the `PYTHONHOME` directory (which you can override in the config).

If you need a standalone Python interpreter just to parse Python syntax (for game scripting), then you can use other implementations of Python like RustPython, IronPython, Jython or pocketpy, but you won't be able to use any C/C++ libraries like numpy.

If security is not a concern, then you can do as popular apps like Blender do and package the interpreter with your application, setting `PYTHONHOME` before initializing the interpreter with `Py_InitializeFromConfig`, as in the following structure:

```
.
└── app_root/
    ├── app.exe
    ├── python311.dll
    └── python/
        ├── DLLs/
        │   ├── _ctypes.pyd
        │   └── ...
        ├── Lib/
        │   ├── site-packages/
        │   │   └── numpy, etc...
        │   ├── os.py
        │   └── ...
        └── bin (optional)/
            ├── python311.dll (see note below)
            └── python.exe (see note below)
```

If you place `python311.dll` in a subdirectory then you must take extra steps to add its location to `PATH`, or use `LoadLibrary` instead and not link it directly. Packing `python.exe` allows your user to install extra libraries using pip by doing `python.exe -m pip install ...`.

Keep in mind that users will be able to modify everything, so if security is a concern then my recommendation is not to use Python at all... though you can create a custom importer, statically link everything, and get it to work. Note that a game did this before (as well as creating custom byte-code for the interpreter) and was still completely reverse engineered, so this is not really fool-proof.
1
2
79,663,207
2025-6-12
https://stackoverflow.com/questions/79663207/matplotlib-slider-not-working-when-plot-is-updated-in-separate-function
I am using matplotlib's Slider to do a simple dynamic plot of a sine curve. I want to change the frequency using a slider. The code looks like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x = np.linspace(400, 800, 400)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)

mod = 1.
lambda_plot, = ax.plot(x, np.sin(mod*x*2*np.pi/500))

ax_mod_slider = fig.add_axes([0.3, 0.1, 0.5, 0.04])
mod_slider = Slider(
    ax=ax_mod_slider,
    label="modulation",
    valmin=0,
    valmax=20,
    valinit=1.,
    orientation="horizontal")

def update_mod(val):
    mod = mod_slider.val
    redraw()
    fig.canvas.draw_idle()

def redraw():
    lambda_plot.set_ydata(np.sin(mod*x*2*np.pi/500))

mod_slider.on_changed(update_mod)
plt.show()
```

This only works if I put the code from redraw() directly in update_mod(), like this:

```python
def update_mod(val):
    mod = mod_slider.val
    lambda_plot.set_ydata(np.sin(mod*x*2*np.pi/500))
    fig.canvas.draw_idle()
```

Why can I not call another function for changing the plot? In this example, I could put it all into update_mod(), but as I do more complicated calculations, I thought it might be good to be able to split update_mod() into separate functions.
In the `redraw()` function:

```python
def redraw():
    lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500))
```

you're using the variable `mod`, but this `mod` is not defined in the local scope of `redraw()`, nor is it declared as `global` or passed as an argument. So Python looks for it in the outer scope and finds the `mod = 1.` you defined at the beginning of the script, which never changes. When `update_mod()` assigns `mod`, it does so in its own local scope, and that doesn't affect the `mod` that `redraw()` sees.

Just pass the `mod` value and it should work properly:

```python
def redraw(mod):
    lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500))
```
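Put together, the corrected callback pair would look like this (the only changes to the question's code are the added parameter and the updated call):

```python
def update_mod(val):
    mod = mod_slider.val   # local value read from the slider
    redraw(mod)            # pass it explicitly instead of relying on scope
    fig.canvas.draw_idle()

def redraw(mod):
    lambda_plot.set_ydata(np.sin(mod * x * 2 * np.pi / 500))
```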
2
2
79,664,893
2025-6-13
https://stackoverflow.com/questions/79664893/average-distance-of-any-point-of-a-disk-to-its-boundary
I was wondering what the average distance "d" is of any point in a ball to its boundary (not specifically the closest point). I made a simple Monte Carlo computation using Muller's method to get a uniform distribution (see this linked answer: Python Uniform distribution of points on 4 dimensional sphere):

```python
def sample_within(radius, dim, nb_points=1):
    # Gaussian-distributed points
    x = np.random.normal(size=(nb_points, dim))
    # Normalize to unit length (points on the surface of the n-sphere)
    x /= np.linalg.norm(x, axis=1)[:, np.newaxis]
    # Random radii with distribution ~ r^(1/n) => uniform volumic distribution in the ball
    r = np.random.rand(nb_points) ** (1/dim)
    # Scale by radius
    points = radius * x * r[:, np.newaxis]
    return points
```

Note: I also used a "stupid" sampling in a box, discarding whatever points lie outside the sphere; everything I say in this post is true with both sampling methods, as they produce the same outcome.

And using (the same method):

```python
def sample_on_border(radius, dim: int, nb_points: int):
    # Gaussian-distributed points
    x = np.random.normal(size=(nb_points, dim))
    # Normalize to unit length (points on the surface of the dim-sphere)
    x /= np.linalg.norm(x, axis=1)[:, np.newaxis]
    # Scale by radius
    points = radius * x
    return points
```

The distances are calculated using (which I have checked manually):

```python
def calc_average_distance(radius: float, dim: int, nb_in_points: int, nb_border_points: int) -> float:
    border_points = sample_on_border(radius, dim, nb_border_points)
    in_points = sample_within(radius, dim, nb_in_points)
    # in_points = border_points  # specific case R=1
    distances = np.sqrt(np.sum((border_points[:, np.newaxis, :] - in_points[np.newaxis, :, :])**2, axis=2))
    average_distance_to_border = np.mean(distances)
    return average_distance_to_border
```

From now on, I will speak in dimension 2 (disk) and with a radius of 1.

Note: it is easy to demonstrate that the average distance between 2 points of the circle is 4/pi (≈1.27), and using only sampled points on the border in my Monte Carlo computation yields the same result. Also, the average distance between the center of the circle and the points of the circle is obviously 1 (since R=1). Therefore, I would expect the average distance d to be between 1 and 1.27 (it is just intuition though).

The computed average distance d is:

    d = 1.13182 ± 0.00035 (with 5 sigma confidence)

which is indeed between 1 and 1.27 (it even seems to be their average).

Now, I was also interested to find whether there is an analytical solution to this problem, and I found the paper "The Average Directional Distance To The Boundary Of A Ball Or Disk" at https://www.emis.de/journals/AMEN/2024/AMEN-A230614.pdf. This paper addresses the problem in n dimensions, but its first section specifically deals with the case of dimension 2.

Note: for the average distance between two points on the unit circle (i.e., when R=1), the paper gives the result 2/pi. However, unless I am mistaken, the correct value should be 4/pi as previously stated, and this can even be derived from the paper's formula by taking r=R=1.

However, I cannot find any (other) mistake in the paper, and the analytical solution (which I have checked several times) yields 8/(3 pi), so around 0.85, which is obviously different from the 1.13 I get when using my Monte Carlo code (and also not between 1 and 4/pi).

Also, just to be sure, I tried:

- sampling non-uniformly (using a uniform distribution to sample the radius instead of using the normal distribution), and I get another answer (1.088) but still not 0.85;
- sampling only along the segment [0, 1] (exploiting the problem's symmetries), which gives 1.13 again or 1.088 again (depending on whether the sampling is uniform or not);
- using polar coordinates in 2D, which also gives 1.13.

Can anyone help me identify a mistake in my reasoning/code?

Edit: I failed to mention that this post https://math.stackexchange.com/questions/875011/average-distance-from-a-point-in-a-ball-to-a-point-on-its-boundary gives 6/5 for a 3d ball, which is the result I get by Monte Carlo but differs from the paper's result (3/4). I am too bad at math to understand their results, but by pure analogy I figured that 1.13... is in fact 32/(9 pi), which is the value of the double integral of r*sqrt(r² - 2r·cos(θ) + 1) for r in [0, 1] and θ in [0, 2π].

Thanks
The error you are making is to compute the average distance between a point within the disk and a point on the border. This is not what the article describes. The article describes the average distance between a point within the disk and the intersection of the border with a direction.

This may seem to be the same, since choosing Φ randomly results in choosing a random point on the border. But it is not, because the border point is this way not uniformly chosen on the border (it is the direction Φ from the inner point that is uniformly chosen).

Trying to keep as much of your code as I can, using the same sample_within and sample_on_border (to ease generalization to other dimensions): what would the distance d(P, Φ) be, if not (as you computed) just ||P - (cos Φ, sin Φ)||? Here P are the points returned by sample_within, and (cos Φ, sin Φ) the points returned by sample_on_border (that is a strange way to choose Φ, sure, rather than just choosing uniformly in [0, 2π], but that way I reuse sample_on_border, and that way it may be usable for more than 2D).

To make the notation lighter, and in the spirit of "I am not just choosing an angle, but a point on a unit sphere", let's call u = (cos Φ, sin Φ).

One way to figure out that distance is to say that the distance is the α such that P + α·u is on the unit circle (or sphere for higher dimensions), with α ≥ 0 (α < 0 is also counted, but with direction -Φ).

That is ||P + α·u|| = 1, i.e. ⟨P + αu, P + αu⟩ = 1, i.e. ⟨P,P⟩ + 2α⟨P,u⟩ + α²⟨u,u⟩ = 1.

Since u is on the unit sphere, ⟨u,u⟩ = 1, and ⟨P,P⟩ = ρ², where ρ is the distance of P to the center. So this is just a second-degree equation to solve, with unknown α:

    α² + 2⟨P,u⟩α + ρ² - 1 = 0
    Δ = 4⟨P,u⟩² + 4 - 4ρ²

So we have two solutions:

    α = (-2⟨P,u⟩ ± √Δ)/2 = ±√(⟨P,u⟩² + 1 - ρ²) - ⟨P,u⟩

Since 1 - ρ² is positive, √(⟨P,u⟩² + 1 - ρ²) is slightly more than |⟨P,u⟩|, so α is positive only for the solution

    α = √(⟨P,u⟩² + 1 - ρ²) - ⟨P,u⟩

So, that is your distance. Let's revise your last function with that distance in mind:

```python
def calc_average_distance2(dim: int, nb_in_points: int, nb_border_points: int) -> float:
    # Just rename `border_points` to `direction_point`, since it is a direction we are choosing here
    # That is what I called u
    direction_point = sample_on_border(1, dim, nb_border_points)
    # And this, what I called P
    in_points = sample_within(1, dim, nb_in_points)
    # I mention later why I tried with this line
    #in_points = sample_on_border(1, dim, nb_in_points)
    # <P,u>
    scal = (direction_point[:, None, :] * in_points[None, :, :]).sum(axis=2)
    ρ = np.linalg.norm(in_points, axis=1)[None, :]
    dist = np.sqrt(scal**2 + 1 - ρ**2) - scal
    # And this comment of yours, I realize only now, while copying this to
    # Stack Overflow, probably has the exact same purpose as my own commented line:
    # in_points = border_points  # specific case R=1
    return dist.mean()
```

Because I did my reasoning on a unit sphere, and was too lazy to redo it with a radius, I removed the radius parameter and used a fixed 1.

The result on my machine, with 5000 points × 1000 directions, is 0.8490785129084816, which is close enough to 8/(3π) ≈ 0.8488263631567751.

And if I uncomment the line choosing in_points on the border, to compute what is called P₁ in the article (I suspect that was also the role of your commented line), I get 0.6368585804371621, which is close enough to 2/π ≈ 0.6366197723675814.

So, I am not 100% positive that this is what you wanted to do, and whether that is what you meant by "average distance of any point of a disk to its boundary" (which is the reason why I was careful to keep the code usable for higher dimensions). But I am 100% positive that this is what the article is doing.

A simpler version

Earlier, I said "α > 0; the -α case will be counted when the direction is -Φ". But what would happen if we used both α solutions in the average? That is just counting the average for both Φ and -Φ (or u and -u, if we think "direction" rather than "angle"). So it would have worked without that α > 0 restriction, counting both α solutions, negative and positive, in the average. But of course it is |α| that is the distance then.

What is the average of the two |α| solutions? It is

    ½ ( |√(⟨P,u⟩² + 1 - ρ²) - ⟨P,u⟩| + |-√(⟨P,u⟩² + 1 - ρ²) - ⟨P,u⟩| )

Since the first term is clearly positive (as we already said) and the second clearly negative, the absolute values can be resolved directly:

    ½ ( √(⟨P,u⟩² + 1 - ρ²) - ⟨P,u⟩ + √(⟨P,u⟩² + 1 - ρ²) + ⟨P,u⟩ ) = √(⟨P,u⟩² + 1 - ρ²)

Not a huge simplification, sure, but it means we can drop the -⟨P,u⟩ term:

```python
dist = np.sqrt(scal**2 + 1 - ρ**2) - scal
# becomes
dist = np.sqrt(scal**2 + 1 - ρ**2)
```

which works as well.
2
1
79,664,331
2025-6-13
https://stackoverflow.com/questions/79664331/why-does-numpy-assign-different-string-dtypes-when-mixing-types-in-np-array
I'm trying to understand how NumPy determines the dtype when creating an array with mixed types. I noticed that the inferred dtype for strings can vary significantly depending on the order and type of elements in the list.

```python
print(np.array([1.0, True, 'is']))
# Output: array(['1.0', 'True', 'is'], dtype='<U32')

print(np.array(['1.0', True, 'is']))
# Output: array(['1.0', 'True', 'is'], dtype='<U5')

print(np.array(['1.0', 'True', 'is']))
# Output: array(['1.0', 'True', 'is'], dtype='<U4')
```

I understand that NumPy upcasts everything to a common type, usually the most general one, and that strings tend to dominate. But why does the resulting dtype (<U32, <U5, <U4) differ so much when the content looks almost the same?

Specifically:

- Why does np.array([1.0, True, 'is']) result in <U32?
- What determines the exact length in the dtype (e.g., <U4 vs <U5)?
- Is there a consistent rule for how NumPy infers the dtype and string length in such cases?
Looking at your array contents and the NumPy type promotion rules, I guess the following applies: for some purposes NumPy will promote almost any other datatype to strings. This applies to array creation or concatenation.

This leaves us with the question about the string lengths. For the complete array, we need to choose a string length so that all values can be represented without loss of information. In your examples, the contents have the following data types:

- Strings ('is', 'True', '1.0'): for these, NumPy just needs to reserve their actual length (thus, if there are multiple strings in the same array, the maximum length of all occurring strings).
- Booleans (True): for converting them to a string, NumPy reserves a string length of 5, since all possible converted values are 'True' (length 4) and 'False' (length 5). We can easily verify this:

```python
np.array(True).astype(str)
# >>> array('True', dtype='<U5')
```

- Floats (1.0): for converting them to a string, NumPy reserves a string length of 32. I assume this is for round-trip safety (i.e. to get the exact same value when converting the string representation back to float). I would have expected that a shorter length (somewhere between 20 and 30) should be enough, but maybe 32, a power of 2, has been chosen for better memory alignment properties. In any case, again, we can verify this:

```python
np.array(1.).astype(str)
# >>> array('1.0', dtype='<U32')
```

Now to your examples:

- np.array([1.0, True, 'is']): we have a float 1.0 (→ string length 32), a boolean True (→ string length 5), and a string 'is' of length 2. The maximum length to represent all values is 32.
- np.array(['1.0', True, 'is']): we have a string '1.0' of length 3, a boolean True (→ string length 5), and a string 'is' of length 2. The maximum length to represent all values is 5.
- np.array(['1.0', 'True', 'is']): we have a string '1.0' of length 3, a string 'True' of length 4, and a string 'is' of length 2. The maximum length to represent all values is 4.
2
1
79,664,217
2025-6-13
https://stackoverflow.com/questions/79664217/arbitrary-stencil-slicing-in-numpy
Is there a simple syntax for creating references to an arbitrary number of neighbouring array elements in numpy? The syntax is relatively straightforward when the number of neighbours is hard-coded. A stencil width of three, for example, is

```python
import numpy as np

x = np.arange(8)

# Hard-coded stencil width of 3
x_neighbours = (
    x[ :-2],
    x[1:-1],
    x[2:  ],
)
```

However, my attempt at arbitrary-width stencils is not particularly readable:

```python
nStencil = 3
x_neighbours = (
    x[indexStart:indexStop]
    for indexStart, indexStop in zip(
        (None, *range(1, nStencil)),
        (*range(1 - nStencil, 0), None),
    )
)
```

Is there a better approach?
I'd recommend using sliding_window_view. Change:

```python
nStencil = 3
x_neighbours = (
    x[indexStart:indexStop]
    for indexStart, indexStop in zip(
        (None, *range(1, nStencil)),
        (*range(1 - nStencil, 0), None),
    )
)
```

To:

```python
from numpy.lib.stride_tricks import sliding_window_view

nStencil = 3
sliding_view = sliding_window_view(x, nStencil)
x_neighbours = tuple(sliding_view[:, i] for i in range(nStencil))
```
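A quick check (not from the original answer) that the two forms agree for `x = np.arange(8)`: each column of the sliding window view matches one of the hard-coded slices.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(8)
view = sliding_window_view(x, 3)            # shape (6, 3); row i is x[i:i+3]
print(np.array_equal(view[:, 0], x[:-2]))   # True
print(np.array_equal(view[:, 1], x[1:-1]))  # True
print(np.array_equal(view[:, 2], x[2:]))    # True
```

Note that sliding_window_view returns a read-only view, so this costs no extra memory, but the columns cannot be written through.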
1
3
79,667,364
2025-6-16
https://stackoverflow.com/questions/79667364/how-to-create-different-type-for-class-variable-and-instance-variable
I want to explain to Pyright that my variables in class and instance have different types. I managed to overload the __get__ method to achieve this, but now Pyright complains about initialization of instances (see last line):

    Literal[1] is not assignable to Field[int]

My code:

```python
import typing as ty
from dataclasses import dataclass

class Field[InstanceT]:
    def __init__(self, default: InstanceT):
        self.default = default
        self.first = True

    def __get__(self, obj, owner):
        if self.first:
            self.first = False
            return self.default
        return self

    if ty.TYPE_CHECKING:
        @ty.overload
        def __get__(self, obj: None, owner: type) -> ty.Self: ...
        @ty.overload
        def __get__(self, obj: object, owner: type) -> InstanceT: ...

@dataclass
class Model:
    field: Field[int] = Field(0)

if __name__ == "__main__":
    # It's fine
    class_field: Field = Model.field
    instance_field: int = Model().field
    assert isinstance(class_field, Field)
    assert isinstance(instance_field, int)

    # Literal[1] is not assignable to Field[int]
    obj = Model(field=1)
```

Asserts are true, but Pyright complains.
You want to have a data descriptor, so it needs a __set__ method. You will get an error depending on the signature of __set__; you want it to accept your generic.

A working example could look like this. The instance value will be stored on the object's _field attribute (see the __set_name__ magic); you could of course also store it in the Field and not on the instance. I am not sure about your self.first logic, so you might want to change some parts.

```python
import typing as ty
from dataclasses import dataclass

class Field[InstanceT]:
    def __init__(self, default: InstanceT):
        self.default = default
        self.first = True

    @ty.overload
    def __get__(self, obj: None, owner: type) -> ty.Self: ...
    @ty.overload
    def __get__(self, obj: object, owner: type) -> InstanceT: ...
    def __get__(self, obj, owner):
        if self.first:
            self.first = False
            return self.default
        if obj is None:  # <-- called on class
            return self
        return getattr(obj, self._name, self.default)  # <-- called on instance

    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __set__(self, obj, value: InstanceT):
        setattr(obj, self._name, value)

@dataclass
class Model:
    field: Field[int] = Field(0)

if __name__ == "__main__":
    class_field = Model.field
    reveal_type(class_field)  # Field[int]
    model = Model()
    instance_field: int = model.field
    reveal_type(instance_field)  # int
    assert isinstance(class_field, Field)
    assert isinstance(instance_field, int)

    obj = Model(field=1)  # OK
    Model(field="1")      # Error
```
1
0
79,667,071
2025-6-16
https://stackoverflow.com/questions/79667071/why-numpy-fabs-is-much-slower-than-abs
This Python 3.12.7 script with NumPy 2.2.4:

```python
import numpy as np, timeit as ti

a = np.random.rand(1000).astype(np.float32)

print(f'Minimum, median and maximum execution time in us:')
for fun in ('np.fabs(a)', 'np.abs(a)'):
    t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=999))
    print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}')
```

produces these results on AMD Ryzen 7 3800X:

```
Minimum, median and maximum execution time in us:
np.fabs(a)              1.813    1.843    4.929
np.abs(a)               0.781    0.811    1.463
```

indicating that np.fabs() is more than 2x slower than np.abs(), despite the latter having more functionality. What is the reason?
fabs always calls the C math library function of the same name (or in this case, the fabsf type variation). Therefore the operation cannot be inlined or vectorized. I have verified this by injecting a custom version using LD_PRELOAD.

I've checked the source code of glibc (which just calls __builtin_fabsf(x)) and looked at the generated code with godbolt. I see no complexity (e.g. NaN handling or math exceptions) that differentiates fabs from the fastest, simple abs implementation. I assume numpy always calls the library in the f… functions out of principle. Similar effects can be expected from fmin and fmax, for example (though here the implementation is actually more complex than plain min() and max()).

From looking at old bug reports, it appears that the performance difference (and the signaling math behavior) is actually platform-dependent. On MIPS, abs() used to be (or still is?) slower, as the compiler could not turn the generic C code into a simple bit mask due to the potential of floating point exceptions (which fabs should never raise).

This also highlights that implementing fabs without compiler support is not as simple as writing x < 0 ? -x : x, because -0 has to be differentiated from +0, which the comparison does not do. np.abs seems to do the right thing on modern numpy, even though a simple C version would not, but I have not investigated where and how the behavior is implemented for floating point types.
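To illustrate the -0 point from the last paragraph, a small sketch (not from the original answer): the comparison-based version fails to clear the sign bit of -0.0, while both fabs and abs clear it.

```python
import numpy as np

x = np.float32(-0.0)
naive = -x if x < 0 else x     # -0.0 < 0 is False, so -0.0 is returned unchanged
print(np.signbit(naive))       # True: the sign bit is still set
print(np.signbit(np.fabs(x)))  # False: fabs clears the sign bit
print(np.signbit(np.abs(x)))   # False: so does np.abs
```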
2
4
79,666,955
2025-6-16
https://stackoverflow.com/questions/79666955/how-to-access-code-in-different-files-inside-the-main-app
I have a rather large app.py file, so I'd like to take a nav panel out and store it in a separate file. I'm not sure how to access code from a different file and include it in the main app.

The app.py file:

```python
import page
from shiny import reactive, render, req, ui
from shiny.express import input, ui

with ui.navset_hidden(id="selected_navset_tab"):
    # Homepage
    with ui.nav_panel("Welcome", value="page_home"):
        with ui.card():
            # Variable calculated
            ui.input_selectize(
                "filler",
                "Filler",
                ["list", "of", "items", "here"],
                multiple=False
            )
```

The other file, page.py:

```python
from shiny import reactive, render, req, ui
from shiny.express import input, ui

with ui.nav_panel("Page I want to put in the main app", value="other_page"):
    @render.express
    def filler_text():
        "Filler text"
```

How can I import the other_page nav panel to show as part of the navset_tab in the main app.py file, without actually putting it in the code?
You can wrap the other_page nav panel into a function and use this function inside the main app. However, doing this naively results in an empty additional nav panel; you need to apply express.expressify. This is because otherwise only the return value of the nav function is shown (None), where we instead want to display the result of each line (see the Programming UI section within the docs for more details).

page.py

```python
from shiny import render
from shiny.express import ui, expressify

@expressify
def otherNav():
    with ui.nav_panel("Page I want to put in the main app", value="other_page"):
        @render.express
        def filler_text():
            "Filler text"
```

app.py

```python
import page
from shiny import ui
from shiny.express import input, ui

with ui.navset_tab(id="selected_navset_tab"):
    # Homepage
    with ui.nav_panel("Welcome", value="page_home"):
        with ui.card():
            # Variable calculated
            ui.input_selectize(
                "filler",
                "Filler",
                ["list", "of", "items", "here"],
                multiple=False
            )
    # other Nav
    page.otherNav()
```
1
1
79,669,389
2025-6-17
https://stackoverflow.com/questions/79669389/pandas-subtract-one-dataframe-from-another-if-match
I have a pandas dataframe that has information about total attendance for schools grouped by School, District, Program, Grade, and Month #. The data looks like the following (df):

```
School  District  Program  Grade  Month  Count
123     456       A        9-12   10     100
123     456       B        9-12   10     95
321     654       A        9-12   10     23
321     456       A        7-8    10     40
```

Some of the counts are inflated and need to be reduced based on the data from another dataframe (ToSubtract):

```
School  District  Program  Grade  Month  Count
123     456       A        9-12   10     10
321     654       A        9-12   10     8
```

Both dataframes are already grouped, so there will be no duplicate grouping. Subtracting ToSubtract from df will result in:

```
School  District  Program  Grade  Month  Count  X
123     456       A        9-12   10     90     *
123     456       B        9-12   10     95
321     654       A        9-12   10     15     *
321     456       A        7-8    10     40
```

(X column to be marked with * to indicate the value was modified.)

df has a lot more entries for all of the other schools, districts, months, etc. I was looking into df.sub() but it looks like the elements have to be lined up. My other idea was to use df.iterrows() to go through each row of df and check if there is a corresponding row in ToSubtract, but this seems very inefficient. What would be the best way to subtract one dataframe from another, matching several columns?
A possible solution:

```python
cols = ['School', 'District', 'Program', 'Grade', 'Month']

df1.set_index(cols).sub(df2.set_index(cols), fill_value=0).reset_index()
```

First, it defines the list cols with the columns used to match records (School, District, Program, Grade, and Month). Then, it sets these columns as the index for both dataframes using set_index, allowing pandas to align rows based on these keys. It uses sub to subtract df2 from df1, with fill_value=0 ensuring that missing rows in either dataframe are treated as zeros (i.e., subtracting from zero or not subtracting at all). Finally, it resets the index back to columns with reset_index.

Output:

```
  School District Program Grade Month Count
0    123      456       A  9-12    10  90.0
1    123      456       B  9-12    10  95.0
2    321      456       A   7-8    10  40.0
3    321      654       A  9-12    10  15.0
```
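The answer above does not produce the `*` marker column from the question; a minimal sketch of one way to add it (my addition, with df1/df2 standing for df/ToSubtract as in the answer): the modified rows are exactly those whose key combination appears in df2.

```python
import numpy as np

out = df1.set_index(cols).sub(df2.set_index(cols), fill_value=0)
# mark rows whose key appears in ToSubtract (df2) as modified
out['X'] = np.where(out.index.isin(df2.set_index(cols).index), '*', '')
out = out.reset_index()
```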
1
3
79,669,116
2025-6-17
https://stackoverflow.com/questions/79669116/creating-a-character-grid-from-a-table-output-is-inverted
I was given a problem that takes in a table of information (grid coordinates and characters) and asked to place them into a table to make a message. I've spent time working on some Python to work it out, but my answer is coming out upside down and I can't figure out why.

```python
import requests
from bs4 import BeautifulSoup

webpage_response = requests.get('https://docs.google.com/document/d/e/2PACX-1vRMx5YQlZNa3ra8dYYxmv-QIQ3YJe8tbI3kqcuC7lQiZm-CSEznKfN_HYNSpoXcZIV3Y_O3YoUB1ecq/pub')
webpage = webpage_response.content
#print(webpage)

soup = BeautifulSoup(webpage, 'html.parser')
table = soup.find('table')
#print(doc_table)

rows = table.find_all('tr')
data = []
for row in rows[1:]:
    cells = row.find_all('td')
    character = cells[1].text.strip()
    x = cells[0].text.strip()
    y = cells[2].text.strip()
    data.append((character, x, y))
#print(data)

max_x = max(x for _, x, _ in data)
max_y = max(y for _, _, y in data)
#print(max_x)
#print(max_y)

grid = [[' ' for _ in range(int(max_x) + 1)] for _ in range(int(max_y) + 1)]

for character, x, y in data:
    grid[int(y)][int(x)] = character

for row in grid:
    print(''.join(row))
```

This is my code. It should print out a grid of characters that look like a capital F, but it's upside down. I think the problem arises when I populate the grid, but I'm not sure how to fix it.
The likely cause of the upside-down output is how you are indexing grid[int(y)]. In typical grid representations for display:

- grid[0] often represents the top row.
- grid[len(grid)-1] often represents the bottom row.

However, if your data's y-coordinates are such that y=0 is the bottom of your intended image (like a standard Cartesian coordinate system where y increases upwards), then directly using grid[int(y)] will place y=0 at grid[0], which is the top. This effectively flips the image vertically. To fix this, you need to adjust the y-coordinate when placing the character into the grid (see the sketch after this explanation).

Explanation of the fix (adjusted_y = max_y - y): let's assume your y-coordinates in the data range from 0 to max_y.

- When y = max_y (the "top" of your desired "F" in the original coordinate system), adjusted_y will be max_y - max_y = 0. This places the character in grid[0], which is the very first (top) row of your printed grid.
- When y = 0 (the "bottom" of your desired "F" in the original coordinate system), adjusted_y will be max_y - 0 = max_y. This places the character in grid[max_y], which is the very last (bottom) row of your printed grid.

This transformation effectively reverses the order of the y-coordinates, causing the image to be displayed correctly (right-side up).
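Applied to the question's loop, the adjustment is a one-line change (a sketch; the coordinates read from the page are strings, so they are converted with int() as in the original code):

```python
max_y = int(max_y)
for character, x, y in data:
    grid[max_y - int(y)][int(x)] = character  # flip vertically so y=0 is the bottom row
```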
1
0
79,668,894
2025-6-17
https://stackoverflow.com/questions/79668894/plotly-express-line-plots-number-of-datapoint-insted-of-actual-data
Plotly Express line plots the number of data points instead of the actual data on the y axis. What's confusing me is that it only does this on one machine; it works perfectly fine on 3 others. I made sure Python and the packages I use are of the same version.

I tried changing the data type to int and float, which only does what I described above. However, if I changed the data type to string, it no longer showed the number of data points, but it scaled all the values that aren't 0 to 1 (other devices did the same on datatype str).

```python
df['Machine_Status'] = df['Machine_Status'].astype(int)
df = df.dropna(subset=['Machine_Status'])
df = df.reset_index(drop=True)
df[['Timestamp', 'Machine_Status']].to_csv('machine_status_timeframe_output.csv', index=False)

fig = px.line(
    df,
    x='Timestamp',
    y='Machine_Status',
    title=f'{machine_id} Activity on {selected_date} (6 AM to 10 PM)',
    labels={'Timestamp': 'Time', 'Machine_Status': 'Activity'},
    line_shape='hv',
)
```

```
Timestamp            Machine_Status
2025-06-16 06:00:04  0
2025-06-16 06:00:09  0
2025-06-16 06:00:14  3
2025-06-16 06:00:18  0
2025-06-16 06:00:23  0
2025-06-16 06:00:28  3
2025-06-16 06:00:33  0
2025-06-16 06:00:38  0
2025-06-16 06:00:43  3
2025-06-16 06:00:48  0
2025-06-16 06:00:53  0
2025-06-16 06:00:58  3
2025-06-16 06:01:03  0
2025-06-16 06:01:08  0
2025-06-16 06:01:13  0
2025-06-16 06:01:18  0
2025-06-16 06:01:23  0
2025-06-16 06:01:28  0
2025-06-16 06:01:33  0
2025-06-16 06:01:38  0
2025-06-16 06:01:43  0
2025-06-16 06:01:48  0
2025-06-16 06:01:53  0
```

Current behavior: (screenshot not included)
Expected behavior: (screenshot not included)
The symptom you're seeing is exactly the bug that crept into Plotly v6.0.0: in Jupyter, the first release of 6.x sometimes mistakes a perfectly normal numeric series for "categorical" and silently switches the underlying trace to the default aggregation "count". Instead of plotting the values, it therefore shows how many rows share each x-value: the bar/line height becomes "number of points", not the data you passed in.

Why you only see it on one PC

The machine that misbehaves almost certainly has plotly == 6.0.0 installed. The other three computers are still on the stable 5.x line (or have been upgraded to ≥ 6.0.1), so the bug never triggers. You can confirm this in one line:

```python
import plotly, sys
print(plotly.__version__, sys.executable)
```

How to fix it

1. Upgrade to a patched 6.x release (recommended)

```
pip install -U plotly        # installs the latest 6.1.2 at the time of writing
# or pin to the first patched build:
# pip install plotly==6.0.1
```

2. …or temporarily roll back to the last 5.x version

```
pip install "plotly<6"       # e.g. 5.24.1 or 5.23.0
```

Either option removes the aggregation glitch; your original code works unchanged.
1
1
79,670,825
2025-6-18
https://stackoverflow.com/questions/79670825/index-array-using-boolean-mask-with-a-broadcasted-dimension
I have two arrays a1 and a2 of shape M1xM2x...xMPx3 and N1xN2x...xNQx3, and a mask m (boolean array) of shape M1xM2x...xMPxN1xN2x...xNQ (you can assume that a1 and a2 are at least 2D).

```python
import numpy as np

np.random.seed(0)
a1 = np.random.rand(4, 5, 3)
a2 = np.random.rand(6, 3)
m = np.random.rand(4, 5, 6) >= 0.7
```

I would like two arrays b1 and b2 of shape Mx3 that contain the values of a1 and a2 where the mask m is true. My current way is to repeat a1 and a2 to obtain arrays of size M1xM2x...xMPxN1xN2x...xNQx3 and then index them using m:

```python
a1_shape, a2_shape = a1.shape[:-1], a2.shape[:-1]
t_shape = a1_shape + a2_shape + (3,)

a1 = a1[..., None, :].repeat(np.prod(a2_shape), axis=-2).reshape(t_shape)
a2 = a2[None, ..., :].repeat(np.prod(a1_shape), axis=0).reshape(t_shape)

b1, b2 = a1[m, :], a2[m, :]
```

This requires creating two huge arrays of size M1xM2x...xMPxN1xN2x...xNQx3, which can be problematic. Is there a way to obtain the same b1 and b2 without having to create such huge intermediate arrays?
A possible solution:

```python
idx = np.where(m)

P = len(a1.shape) - 1
Q = len(a2.shape) - 1

a1_idx = idx[:P]
a2_idx = idx[P:P+Q]

b1 = a1[a1_idx] if P > 0 else np.tile(a1, (np.sum(m), 1))
b2 = a2[a2_idx] if Q > 0 else np.tile(a2, (np.sum(m), 1))
```

This solution begins by using np.where(m) to obtain a tuple of index arrays corresponding to all True locations in m. The number of spatial dimensions for a1 is determined using P = len(a1.shape) - 1, where .shape provides the dimensions and len counts them; Q is determined similarly for a2. These index arrays are then partitioned: the first P arrays (idx[:P]) are assigned to a1_idx and the subsequent Q arrays (idx[P:P+Q]) are assigned to a2_idx. For edge cases where an array might lack spatial dimensions (P=0 or Q=0), it handles this by using np.tile, which repeats the original vector np.sum(m) times (counting True values), ensuring the output shape matches the expected (K, 3) format.
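A quick sanity check (my addition) that this matches the question's approach on the example arrays, using np.broadcast_to (which creates views, not copies) for the reference result:

```python
import numpy as np

np.random.seed(0)
a1 = np.random.rand(4, 5, 3)
a2 = np.random.rand(6, 3)
m = np.random.rand(4, 5, 6) >= 0.7

# reference result via explicit broadcasting, as in the question
b1_ref = np.broadcast_to(a1[:, :, None, :], (4, 5, 6, 3))[m]
b2_ref = np.broadcast_to(a2[None, None, :, :], (4, 5, 6, 3))[m]

# the answer's index-based result
idx = np.where(m)
b1 = a1[idx[:2]]
b2 = a2[idx[2:]]

print(np.array_equal(b1, b1_ref), np.array_equal(b2, b2_ref))  # True True
```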
1
1
79,670,289
2025-6-18
https://stackoverflow.com/questions/79670289/type-hinting-a-dynamic-asymmetric-class-property
I'm currently working on removing all the type errors from my Python project in VS Code. Assume you have a Python class that has an asymmetric property. It takes any kind of iterable and converts it into a custom list subclass with additional methods.

```python
class ObservableList(list):
    """list with events for change, insert, remove, ..."""
    # ...

class MyFrame:
    @property
    def my_list(self) -> ObservableList:
        return self._my_list

    @my_list.setter
    def my_list(self, val: typing.Iterable):
        self._my_list = ObservableList(val)

    # ...

my_frame = MyFrame()
```

VS Code (i.e. Pyright) will correctly deduce that:

- you can set my_frame.my_list using any iterable, and
- my_frame.my_list will always be an ObservableList.

Now, let's assume that there is no actual @property code. Instead, the property is implemented dynamically using __setattr__ and __getattr__. (Context: we're talking about a GUI generator which provides automatic bindings.) I want to use a declaration on class level to tell the typechecker that this property exists, without actually spelling it out:

```python
class MyFrame(AutoFrame):
    my_list: ???
    # ...
```

(AutoFrame provides the __getattr__ / __setattr__ implementation.)

What can I put in place of the ??? to make this work?

- When using ObservableList, Pyright complains when I assign a plain list to the property.
- When using Iterable or list, Pyright complains when I access ObservableList-specific methods.
- Same when using list | ObservableList: Pyright assumes that the property could return both, and list misses the additional methods.

Re: close vote: the linked question's answer basically boils down to going back to square one (implementing the property explicitly). The point of using AutoFrame is specifically to get rid of that repetitive boilerplate code. Just imagine doing this for a GUI frame with a dozen bound controls. I can live with a single added declaration line, but not much more.
You can use the Any trick (see my answer on how the four different cases work), but in short, you type it as:

```python
from typing import Any, reveal_type

class ObservableList(list):
    """list with events for change, insert, remove, ..."""
    def foo(self) -> str: ...

class MyFrame:
    my_list: ObservableList | Any

MyFrame().my_list = [2, 3]  # OK
value: str = MyFrame().my_list.foo()  # OK, add :str to avoid value being str | Any
```

Of course, this has the downside that any assignment will be okay as well:

```python
MyFrame().my_list = "no error"
```

so you need to be aware of that.

Alternatively, you can implement the property behind an if TYPE_CHECKING block OR in a .pyi file: no runtime influence and correct typing, but boilerplate:

```python
class MyFrame:
    if TYPE_CHECKING:
        @property
        def my_list(self) -> ObservableList: ...
        @my_list.setter
        def my_list(self, val: typing.Iterable): ...
```
1
2
79,673,580
2025-6-20
https://stackoverflow.com/questions/79673580/why-do-i-get-a-division-by-zero-error-when-plotting-inverse-function-on-secondar
I get a division by zero error when I run the following Python script.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 10))

freq = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.0]
amp = [0.0, 0.0, 0.0, 0.0, 0.0, 0.012, 0.031, 0.074, 0.082, 0.084, 0.080, 0.078, 0.072, 0.059, 0.039, 0.019, 0.010]

ax.semilogx(freq, amp, marker='s', color='purple')
plt.xlim(0.1, 10)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Crest Loss (ft)')
ax.set(title='Fundamental Frequency')
ax.grid()
ax.grid(which="minor", color="0.9")

def forward(x):
    return 1 / x

def inverse(x):
    return 1 / x

secax = ax.secondary_xaxis('top', functions=(forward, inverse))
secax.set_xlabel('Period (s)')
plt.show()
```

The plot seems to output correctly, but I don't know why I get a division by zero error when all the data plotted on the x axis is greater than zero.
I think it is just a "feature" of the secondary axis. In the third example in the docs here, they explicitly have a check to avoid divide-by-zero warnings:

```python
fig, ax = plt.subplots(layout='constrained')
x = np.arange(0.02, 1, 0.02)
np.random.seed(19680801)
y = np.random.randn(len(x)) ** 2
ax.loglog(x, y)
ax.set_xlabel('f [Hz]')
ax.set_ylabel('PSD')
ax.set_title('Random spectrum')

def one_over(x):
    """Vectorized 1/x, treating x==0 manually"""
    x = np.array(x, float)
    near_zero = np.isclose(x, 0)
    x[near_zero] = np.inf
    x[~near_zero] = 1 / x[~near_zero]
    return x

# the function "1/x" is its own inverse
inverse = one_over

secax = ax.secondary_xaxis('top', functions=(one_over, inverse))
secax.set_xlabel('period [s]')
plt.show()
```

Also, see the code example on the secondary_axis docs page here, where the same thing is done.
1
1
79,673,208
2025-6-20
https://stackoverflow.com/questions/79673208/dynamically-added-class-attributes-with-getter-setter-methods-in-a-loop-in-pytho
I'm trying to dynamically assign class attributes to an existing custom class. These attributes have individual getter and setter methods, defined in a loop across all attributes to be created, and are assigned with setattr() within the loop. Please see the code below.

If I use setattr(), the newly created attributes all point towards the last attribute that was created in the list. Whereas if I use setattr() within an eval() statement, it works correctly, i.e. each attribute points towards its own value.

I think I kinda understand why (something about the scope of the functions and setattr() in the loop), but I can't really formulate it into words. Can someone smarter than me please give a formal explanation about what is going on, and whether there is a cleaner solution than using eval()?

```python
for attr in attr_list:
    def getter(self):
        return getattr(self, attr)

    def setter(self, value):
        setattr(self, attr, value)

    # DOES NOT WORK
    setattr(self.__class__, attr, property(getter, setter))

    # WORKS
    eval(f'setattr(self.__class__, \'{attr}\', property(lambda self: getattr(self, \'{attr}\'), lambda self, value: setattr(self, {attr}, value)))')
```
You create different getters and setters at each iteration, but they all use the attr variable from the outer scope; this means that when you access a getter/setter it uses the current value of the attr variable, which will be the final value from the loop. You need to have attr in a different scope for each getter and setter, so that each refers to the corresponding attr value.

Note: you also want the getter/setter to refer to a different attribute (i.e. _attr) or you will enter an infinite loop.

```python
def create_property(attr):
    private_attr = f"_{attr}"

    def getter(self):
        return getattr(self, private_attr)

    def setter(self, value):
        setattr(self, private_attr, value)

    return property(getter, setter)

attr_list = ["a", "b", "c"]

class Test:
    pass

for attr in attr_list:
    setattr(Test, attr, create_property(attr))

test1 = Test()
test2 = Test()

test1.a = 42
test1.b = 17
test2.a = 96
test2.b = 123

print(test1.a, test1.b, test1._a, test1._b)
print(test2.a, test2.b, test2._a, test2._b)
```

Outputs:

```
42 17 42 17
96 123 96 123
```

fiddle
1
2
79,674,833
2025-6-22
https://stackoverflow.com/questions/79674833/scipy-optimizewarning-covariance-of-the-parameters-could-not-be-estimated-when
I'm trying to plot some data with a non-linear fit using the function N(J'') = Bv·(2J''+1)/(kB·T) · exp(-Bv·J''·(J''+1)/(kB·T)), kB and Bv being constants while J'' is the independent variable and T is a parameter. I tried to do this in Python:

```python
# Constants
kb = 1.380649e-23
B = 0.3808

# Fit function
def func(J, T):
    return (B*(2*J+1) / kb*T * np.exp(-B*J*(J+1)/kb*T))

popt, pcov = optimize.curve_fit(func, x, y)
print(popt)

plt.plot(x, y, "o")
plt.plot(x, func(j, popt[0]))
```

But this results in the warning "OptimizeWarning: Covariance of the parameters could not be estimated", and the parameter turns out to be 1. This is the data:

```python
x = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48]
y = [0.185,0.375,0.548,0.695,0.849,0.931,0.996,0.992,1.0,0.977,0.927,0.834,0.691,0.68,0.575,0.479,0.421,0.351,0.259,0.208,0.162,0.135,0.093,0.066]
```
Why?

You have three problems in your model:

- Priority of operations is not respected: as pointed out by @jared, you must add parentheses around the term kb * T;
- The target dataset is normalized to unity, whereas it is the model's area that is unitary (it is a distribution); this normalization therefore prevents the model from converging. Adding the missing parameter N to the model allows convergence;
- Scale and/or units issue: your Boltzmann constant is expressed in J/K, but we don't have any clue about the units of B. Recall that the term inside the exponential must be dimensionless. Switching the constant to eV/K seems to provide a credible temperature.

MCVE

Below is a complete MCVE fitting your data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize, integrate
from sklearn import metrics

#kb = 1.380649e-23   # J/K
kb = 8.617333262e-5  # eV/K
B = 0.3808           # ?

def model(J, T, N):
    return N * (B * (2 * J + 1) / (kb * T) * np.exp(- B * J * (J + 1) / (kb * T)))

J = np.array([2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48])
N = np.array([0.185,0.375,0.548,0.695,0.849,0.931,0.996,0.992,1.0,0.977,0.927,0.834,0.691,0.68,0.575,0.479,0.421,0.351,0.259,0.208,0.162,0.135,0.093,0.066])

popt, pcov = optimize.curve_fit(model, J, N, p0=[1e5, 1])
#(array([2.51661517e+06, 2.75770698e+01]),
# array([[1.17518427e+09, 3.25544771e+03],
#        [3.25544771e+03, 6.04463335e-02]]))

np.sqrt(np.diag(pcov))
# array([3.42809607e+04, 2.45858361e-01])

Nhat = model(J, *popt)
metrics.r2_score(N, Nhat)  # 0.9940231921241076

Jlin = np.linspace(J.min(), J.max(), 200)
Nlin = model(Jlin, *popt)

fig, axe = plt.subplots()
axe.scatter(J, N)
axe.plot(Jlin, Nlin)
axe.set_xlabel("$J''$")
axe.set_ylabel("$N(J'')$")
axe.grid()
```

Regressed values are about T = 2.51e6 ± 0.03e6 and N = 2.76e1 ± 0.02e1; both have two significant figures. The fit is reasonable (R² = 0.994). (Figure omitted.)

If your Boltzmann constant is expressed in J/K, the temperature T is about 1.6e+25, which seems a bit too much for such a variable.

Check

We can check that the area under the curve is indeed unitary:

```python
Jlin = np.linspace(0, 100, 2000)
Nlin = model(Jlin, *popt)

I = integrate.trapezoid(Nlin / popt[-1], Jlin)  # 0.9999992484190978
```
5
4
79,675,821
2025-6-23
https://stackoverflow.com/questions/79675821/force-python-sqlite3-to-report-non-existing-columns-in-where-clause
This is my code:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()

cursor.execute('''
    CREATE TABLE schools (
        county TEXT,
        statustype TEXT
    )
''')

cursor.execute('''
    INSERT INTO schools VALUES ('some_county', 'some_status')
''')

cursor.execute('''
    SELECT COUNT(*) FROM schools
    WHERE county = 'Alpine'
    AND statustype IN ('Active', 'Closed')
    AND "School Type" = 'District Community Day Schools'
''')

result = cursor.fetchone()[0]
print(f"Count result: {result}")
conn.close()
```

Note that there is intentionally no 'School Type' column in the database schema in this example. The result is 0. Is it possible to change some settings of SQLite3 or of the database in order to get an error about the non-existing column instead?
SQLite has a quirk when it comes to delimiting identifiers. In ANSI SQL, double-quoting a column name does what you expect, i.e. delimits the column name. However, in SQLite you get the following behaviour (from the docs):

> ... in an effort to be compatible with MySQL 3.x (which was one of the most widely used RDBMSes when SQLite was first being designed) SQLite will also interpret a double-quotes string as string literal if it does not match any valid identifier.
>
> This misfeature means that a misspelled double-quoted identifier will be interpreted as a string literal, rather than generating an error. It also lures developers who are new to the SQL language into the bad habit of using double-quoted string literals when they really need to learn to use the correct single-quoted string literal form.

Therefore, because the column doesn't exist, you end up comparing strings to strings, e.g. "School Type" = 'District Community Day Schools', which is always false. You therefore need to delimit the column name using either square brackets ([]) or backticks, e.g. `[column name here]`, after which you'll get an error such as:

    sqlite3.OperationalError: no such column:

Of course, in general it's actually better to not use spaces in the first place (e.g. school_type).

Note: the same documentation goes on to say:

> As of SQLite 3.29.0 (2019-07-10) the use of double-quoted string literals can be disabled at run-time...

which would seem to be a good practice in order to avoid this sort of confusion.
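To make the failure concrete, here is the question's query rewritten with a bracket-delimited identifier (a sketch; only the last WHERE clause changes):

```python
cursor.execute('''
    SELECT COUNT(*) FROM schools
    WHERE county = 'Alpine'
    AND statustype IN ('Active', 'Closed')
    AND [School Type] = 'District Community Day Schools'
''')
# raises: sqlite3.OperationalError: no such column: School Type
```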
1
1
79,675,526
2025-6-23
https://stackoverflow.com/questions/79675526/how-to-edit-an-element-inside-of-a-tkinter-grid
I'm working on a project to design a chessboard. I have successfully created the board. My goal is to edit the board. My original idea is to use buttons with the Tkinter grid to make the moves work. I decided to make the board using:

```python
window = tk.Tk()  # Creating the GUI and Display
window.geometry("800x500")
window.minsize(800, 500)  # Locking the size of the Display
window.maxsize(800, 500)

# Adding the board
SeenBoard = tk.Frame(window, width=300, height=300, padx=5, pady=5)  # Adds padding to the grid
SeenBoard.pack(side=tk.LEFT)  # the left isn't needed yet, but will be later

for row in range(8):  # 8 Rows along
    for column in range(8):  # 8 Columns down
        tk.Button(SeenBoard, text=f"{row},{column}", width=5, height=3).grid(row=row, column=column)  # Creates each button

window.mainloop()  # Idk what this really does? it just kinda works.
```

My problem is that with this way of making the board, I cannot edit the buttons, nor locate them. I wish to be able to change the color of each button, to alternate between green and white, for that grid pattern feel. In order to do this I can only think of two ways. The first would be to somehow give each of the buttons an identifier beforehand, such as labeling them all as Grid1_1.button(...). My problem with doing it manually is that I want to keep the code concise and not repetitive, so any way in which I could add lots of them at once would solve my problem.

The second solution, to my knowledge, would be somehow having the ability with Tkinter to select a specific grid slot, such as 2,2, and then trying to modify it with that. I heard about grid slaves, but I'm too small-brained to understand how it works.

My request: I would like to know the solution, or even the concept, behind trying to alternate the color of the board in a pattern (any method), and any solution that would allow me to access a button and change its text/color.
To change the color of the checkerboard squares, one can make use of modulo 2. Here's a simple example showcasing that:

```python
import tkinter as tk

COLOR_1 = "white"
COLOR_2 = "green"

for row in range(8):  # 8 Rows along
    for column in range(8):  # 8 Columns down
        if ((row % 2) + column) % 2:
            color = COLOR_1
        else:
            color = COLOR_2
        tk.Button(SeenBoard, text=f"{row},{column}",
                  width=5, height=3, bg=color).grid(row=row, column=column)
```

The reason row % 2 is added to column is to achieve an alternating pattern. Consider a simple example of a 2x2 chessboard:

```
row  col  row % 2  ((row % 2) + col) % 2
0    0    0        0
0    1    0        1
1    0    1        1
1    1    1        0
```

Which gives the pattern:

```
+---+---+
| 0 | 1 |
+---+---+
| 1 | 0 |
+---+---+
```

To access button objects in the future, simply store them in a nested list, using your row and column numbers as indexes:

```python
# create a list to hold the buttons
array = list()

for row in range(8):
    # Create an array to store a row of buttons
    array.append(list())
    for column in range(8):
        # Append each column to the corresponding row
        array[row].append(tk.Button(SeenBoard, text=f"{row},{column}",
                                    width=5, height=3))
        # Add it to the grid
        array[row][column].grid(row=row, column=column)

# Modify by using rows & columns as indexes
array[3][2]["text"] = "It works!"
```

This will also prevent your buttons from being garbage collected, since you're holding a reference to them.

Output: notice how the button at (3,2) is modified to say "It works!" (screenshot not included).

As for the last question, mainloop runs an event loop, which does two things:

1. It prevents the program from exiting after the last line of code.
2. It checks for events (such as a button press).

(In reality, things might work differently, but this is what it generally does.)
2
2
79,677,119
2025-6-24
https://stackoverflow.com/questions/79677119/type-hinting-callable-with-positional-keyword-only-arguments-without-typing-prot
I'm wondering if it is possible to type-hint a Callable with positional-/keyword-only arguments without using typing.Protocol. For example, in this snippet, how would I type-hint x correctly so that add is assignable to it?
from typing import Callable


def add(arg: int, /, *, other: str) -> str:
    return f"{arg}{other}"


x: Callable[[int, str], str] = add
# Type Error:
# `(arg: int, /, *, other: str) -> str` is not assignable to `(int, str) -> str`
In short: no. A combination is (currently) not possible, as keyword-only parameters cannot be expressed with Callable, which describes positional-only parameters; you need a Protocol for more specific typing. To quote the specs:

Parameters specified using Callable are assumed to be positional-only. The Callable form provides no way to specify keyword-only parameters, variadic parameters, or default argument values. For these use cases, see the section on Callback protocols.

A bit more on assignability: a function with standard parameters (keyword or positional) can be assigned to a function type with any parameter kind. The reverse does not hold; if you have a function that is keyword-only / positional-only, it can only be assigned to a matching type, i.e. in your case you need a Protocol that reflects these parameter kinds exactly.
from typing import Callable, Protocol

class Standard(Protocol):
    def __call__(self, a: int) -> None: ...

class KwOnly(Protocol):
    def __call__(self, *, a: int) -> None: ...

class PosOnly(Protocol):
    def __call__(self, a: int, /) -> None: ...

CallableType = Callable[[int], None]

def func(standard: Standard, kw_only: KwOnly, callable: CallableType, pos_only: PosOnly):
    # Standard assignable to all
    f1a: KwOnly = standard  # OK
    f1b: CallableType = standard  # OK
    f1c: PosOnly = standard  # OK

    # Keyword-only is only assignable to KwOnly itself
    f2a: Standard = kw_only  # error
    f2b: CallableType = kw_only  # error
    f2c: PosOnly = kw_only  # error

    # CallableType and PosOnly are equivalent; only assignable to positional-only/Callable
    f3a: Standard = callable  # error
    f3b: KwOnly = callable  # error
    f3c: PosOnly = callable  # OK - as equivalent

    f4a: Standard = pos_only  # error
    f4b: KwOnly = pos_only  # error
    f4c: CallableType = pos_only  # OK - as equivalent
I am not aware of any plans to extend Callable in this regard, e.g. to accept keyword-only parameters via TypedDicts.
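For completeness, a sketch of the Protocol that matches the add signature from the question exactly (the protocol name AddLike is my own):
from typing import Protocol

class AddLike(Protocol):
    def __call__(self, arg: int, /, *, other: str) -> str: ...

def add(arg: int, /, *, other: str) -> str:
    return f"{arg}{other}"

x: AddLike = add  # OK: positional-only and keyword-only parts both match
Note that the name of the positional-only parameter does not have to match, but the keyword-only name other must.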
3
1
79,676,972
2025-6-24
https://stackoverflow.com/questions/79676972/python-while-loop-is-executed-but-then-it-fails-to-execute-an-if-statement-afte
I am working on a simple 2-player dice rolling game and have found a problem that I haven't been able to fix: after a while loop, it refuses to execute an if statement correctly.
def Main():
    global player1_score, player2_score
    rounds_input = int(input('How many rounds do you want?: '))
    i = rounds_input
    n = 0
    while n < i:
        gameFunc()
        n += 1
    else:
        if player1_score > player1_score:
            print("Player 1 Wins Game", count)
        elif player1_score < player1_score:
            print("Player 2 Wins Game", count)
The code is meant to take the user's input and execute gameFunc() that many times. Once this has been completed, I want it to check who won the most rounds and print the corresponding "Player Wins Game x", where x is the number of times the word "game" appears in a text file called leaderboard.txt, plus 1.
def gameFunc():
    global player1_score, player2_score
    player1_roll = random.randint(0, 20)
    player2_roll = random.randint(0, 20)
    print("player 1 rolled", player1_roll)
    print("player 2 rolled", player2_roll)
    if player1_roll > player2_roll:
        print("Player 1 Wins!!" + "\n")
        player1_score += 1
    elif player1_roll < player2_roll:
        print("Player 2 Wins!!" + "\n")
        player2_score += 1
    else:
        print("It's a Tie!!" + "\n")
The entire code is as follows:
import random

word = "game"
count = 0
game = count + 1
player1_score = 0
player2_score = 0

with open("leaderboard.txt", 'r') as f:
    for line in f:
        words = line.split()
        for i in words:
            if(i==word):
                count=count+1
print("Occurrences of the word", word, "is", count)
print("Game number ", game)

def gameFunc():
    global player1_score, player2_score
    player1_roll = random.randint(0, 20)
    player2_roll = random.randint(0, 20)
    print("player 1 rolled", player1_roll)
    print("player 2 rolled", player2_roll)
    if player1_roll > player2_roll:
        print("Player 1 Wins!!" + "\n")
        player1_score += 1
    elif player1_roll < player2_roll:
        print("Player 2 Wins!!" + "\n")
        player2_score += 1
    else:
        print("It's a Tie!!" + "\n")

def Main():
    global player1_score, player2_score
    rounds_input = int(input('How many rounds do you want?: '))
    i = rounds_input
    n = 0
    while n < i:
        gameFunc()
        n += 1
    else:
        if player1_score > player1_score:
            print("Player 1 Wins Game", count)
        elif player1_score < player1_score:
            print("Player 2 Wins Game", count)

def Start():
    print("Do you want to begin?")
    beginPrompt = input('Type START to begin: ').lower()
    if beginPrompt == 'start':
        print("\n")
        Main()
    else:
        print("\n" + "Try Again" + "\n")
        Start()

Start()
In your code, look at the logic of the final if statement in the Main() function:
if player1_score > player1_score:
This condition always evaluates to False because you're comparing player1_score to itself, not to player2_score. The same goes for this one:
elif player1_score < player1_score:
It should be something like this:
if player1_score > player2_score:
    print("Player 1 Wins Game", count + 1)  # You want "count + 1" here
elif player1_score < player2_score:
    print("Player 2 Wins Game", count + 1)
else:
    print("It's a Draw!")
Also, you're printing the game number, but you're using the value count, which is how many times "game" appeared in the leaderboard file. You set:
game = count + 1
But then you never use game, only count, which is the number of previous games. So you probably want to show the next game number as count + 1.
Your Main() should look like this:
def Main():
    global player1_score, player2_score
    rounds_input = int(input('How many rounds do you want?: '))
    i = rounds_input
    n = 0
    while n < i:
        gameFunc()
        n += 1
    else:
        if player1_score > player2_score:
            print("Player 1 Wins Game", count + 1)
        elif player1_score < player2_score:
            print("Player 2 Wins Game", count + 1)
        else:
            print("It's a Draw!")
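A side note on while ... else, in case it was added to "make the if run": the else clause of a loop simply runs whenever the loop finishes without hitting a break, so here it is equivalent to placing the code directly after the loop. A minimal sketch:
n = 0
while n < 3:
    n += 1
# no else needed; this line runs once the loop has finished
print("loop finished, n =", n)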
1
2
79,678,764
2025-6-25
https://stackoverflow.com/questions/79678764/how-can-i-vectorize-a-function-that-returns-eigenvalues-and-eigenvectors-of-a-ma
I'm working with a function in Python that constructs a 4×4 matrix based on inputs (x1, y1, x2, y2), and computes its eigenvalues and eigenvectors using np.linalg.eigh. Here is a simplified version of my code: import numpy as np def f(kx, ky): return kx + 1j * ky def fs(kx, ky): return np.conj(f(kx, ky)) def eig(x1, y1, x2, y2): a = 10 x = x1 + x2 y = y1 + y2 H = np.array([ [a, f(x, y), f(x, y), fs(x, y)], [fs(x, y), a, 0, f(x, y)], [fs(x, y), 0, -a, f(x, y)], [f(x, y), fs(x, y), fs(x, y), -a] ]) Eval, Evec = np.linalg.eigh(H) sorted_indices = np.argsort(Eval) return Eval[sorted_indices], Evec[:, sorted_indices] Now, I have 1-d arrays of input values: x1_array, y1_array, x2_array, y2_array # all same shape I want to efficiently vectorize this function across those arrays — i.e., compute all eigenvalues/eigenvectors in a batched way without writing an explicit Python loop if possible.
It takes a bit of stacking, but you can create an array of shape (batch_size, 4, 4) and pass it to np.linalg.eigh. Also note the slight optimization of avoiding repeated evaluations of f(x, y) and fs(x, y).
def eig_vectorized(x1_array, y1_array, x2_array, y2_array):
    a = 10
    x_array = x1_array + x2_array
    y_array = y1_array + y2_array

    f_xy = f(x_array, y_array)    # Optimization, you don't want to recompute
    fs_xy = fs(x_array, y_array)  # these two again and again

    # Create H as a (batch_size, 4, 4) array
    H = np.stack([
        np.stack([np.full_like(f_xy, a), f_xy, f_xy, fs_xy], axis=-1),
        np.stack([fs_xy, np.full_like(f_xy, a), np.zeros_like(f_xy), f_xy], axis=-1),
        np.stack([fs_xy, np.zeros_like(f_xy), -np.full_like(f_xy, a), f_xy], axis=-1),
        np.stack([f_xy, fs_xy, fs_xy, -np.full_like(f_xy, a)], axis=-1)
    ], axis=-2)

    Evals, Evecs = np.linalg.eigh(H)  # Compute eigenvalues and -vectors

    # Sort eigenvalues and eigenvectors. (eigh already returns eigenvalues in
    # ascending order, so this is only a safeguard.) Note the index placement:
    # the new axis must go before the last dimension to reorder the columns.
    sorted_indices = np.argsort(Evals, axis=-1)
    Evals = np.take_along_axis(Evals, sorted_indices, axis=-1)
    Evecs = np.take_along_axis(Evecs, sorted_indices[..., None, :], axis=-1)
    return Evals, Evecs
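A quick usage sketch (the random inputs are placeholders) to confirm the batched shapes:
rng = np.random.default_rng(0)
x1, y1, x2, y2 = rng.random((4, 1000))  # four arrays of the same shape

Evals, Evecs = eig_vectorized(x1, y1, x2, y2)
print(Evals.shape)  # (1000, 4): 4 eigenvalues per sample
print(Evecs.shape)  # (1000, 4, 4): eigenvectors in the columns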
1
1
79,680,007
2025-6-26
https://stackoverflow.com/questions/79680007/is-this-a-bug-in-numpy-histogram-function
This Python 3.12.7 script with numpy 2.2.4:
import numpy as np

a = np.random.randint(0, 256, (500, 500)).astype(np.uint8)

counts, bins = np.histogram(a, range(0, 255, 25))
print(np.column_stack((counts, bins[:-1], bins[1:])))

counts, bins = np.histogram(a, range(0, 257, 16))
print(np.column_stack((counts, bins[:-1], bins[1:])))
produces this kind of output:
[[24721     0    25]
 [24287    25    50]
 [24413    50    75]
 [24441    75   100]
 [24664   100   125]
 [24390   125   150]
 [24488   150   175]
 [24355   175   200]
 [24167   200   225]
 [25282   225   250]]
[[15800     0    16]
 [15691    16    32]
 [15640    32    48]
 [15514    48    64]
 [15732    64    80]
 [15506    80    96]
 [15823    96   112]
 [15724   112   128]
 [15629   128   144]
 [15681   144   160]
 [15661   160   176]
 [15558   176   192]
 [15526   192   208]
 [15469   208   224]
 [15772   224   240]
 [15274   240   256]]
where the first histogram always has the highest count in bin [225, 250). The second histogram indicates a uniform distribution, as expected. I tried a dozen times and the anomaly was always there. Can someone explain this behavior?
I think the docs explain pretty well what's happening, but are spread out in two different places. First, the range range(0, 255, 25) is supplying the bins parameter, not the range parameter. Secondly, the Notes section states: All but the last (righthand-most) bin is half-open. In other words, if bins is: [1, 2, 3, 4] then the first bin is [1,2) (including 1, but excluding 2) and the second [2,3). The last bin, however, is [3,4], which includes 4. Pretty sure the extra counts in your case are the number of elements that equal 250. This makes sense, since the increase is about 1/25th of the bin size compared to the other bins, which all have a width of 25.
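A quick way to check that explanation against the question's data (a sketch reusing a from the question):
counts, bins = np.histogram(a, range(0, 255, 25))
print(counts[-1])                     # the inflated last bin [225, 250]
print((a == 250).sum())               # elements sitting on the closed right edge
print(counts[-1] - (a == 250).sum())  # now roughly in line with the other bins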
2
3
79,680,409
2025-6-26
https://stackoverflow.com/questions/79680409/cant-get-frequency-data-right-for-seaborn-histogram
I have a list containing thousands of survey answers ranging from 'A' to 'I', with the addition of None for blanks: Raw data list example: [None, 'A', 'G', None, None, 'I', 'B', ...<snip>... , 'C'] I would like to produce a Seaborn histogram showing the frequency of each response (in alphabetical order), something similar to this Excel chart: Desired outcome (Excel version) (Numbers are different here than in test data below) The closest I get is this, with all columns incorrectly equal to 1: I tried histplot, catplot and displot as per some random tutorials online. I am unsure of how to process the list of raw values into the right format for Seaborn. I have followed many different tutorials doing things like sns.histplot(data=df, x="Frequency") but can't get the dataframe right before this step. Main method I tried for frequency (from some tutorials): df = pd.DataFrame.from_dict(Counter(values), orient='index').reset_index() Any suggestions will be humbly received. Much appreciation in advance. Minimum reproducible example: import matplotlib.pyplot as plt from collections import Counter import seaborn as sns import pandas as pd # Test data - real data is much larger and extracted as a list from an SQLite3 DB. values = ['E', None, 'B', 'H', 'I', 'A', None, None, 'D', 'C', 'E', 'I', 'C', 'B', None, 'A', None, 'E', 'F', 'H', 'A', 'D', 'A', 'A', 'F', 'A', 'C', 'C', 'H', 'E', None, 'B', 'E', 'I', 'G', 'A', 'I', 'A', 'B', 'I'] sns.set_theme(style="darkgrid") df = pd.DataFrame.from_dict(Counter(values), orient='index').reset_index() df.columns = ['Choice', 'Frequency'] sns.histplot(data=df, x='Choice') plt.show()
Since you have pre-aggregated data, you should use a barplot, not a histplot, which is designed to perform the aggregation itself. For the alphabetical order, use sort_index:
df = (pd.DataFrame.from_dict(Counter(values), orient='index')
        .sort_index().reset_index()
     )
df.columns = ['Choice', 'Frequency']

sns.barplot(data=df, x='Choice', y='Frequency')
Output:

Note that you do not need to use a Counter/DataFrame; you could directly go with a countplot:
sns.countplot(values)
Output:
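If the None blanks should be left out of the chart (an assumption on my part; the Excel example only shows A through I), you can filter them and force alphabetical order in one go:
answered = [v for v in values if v is not None]
sns.countplot(x=answered, order=sorted(set(answered)))
plt.show()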
2
0
79,680,212
2025-6-26
https://stackoverflow.com/questions/79680212/using-python-pptx-how-can-i-display-the-in-my-chart-data
I would like to create a chart on a PowerPoint slide with percentage data. python-pptx doesn't let me directly put the % symbol in my data, so I had to supply the raw numbers only, but I want to show the % symbol on the chart and also on the axes. Here's my code:
if slide:
    for shape in slide.shapes:
        if shape.shape_type == MSO_SHAPE_TYPE.CHART:
            chart = shape.chart
            if result and moisDate and nameIntake:
                chart_data = CategoryChartData()
                chart_data.categories = nameIntake
                for moisSingleDate in moisDate:
                    chart_data.add_series(str(moisSingleDate), values=(result[moisSingleDate]), number_format='0%')
                chart.replace_data(chart_data)
                print(result)
Unfortunately, my data gets multiplied by 100: instead of displaying 24.93% I get 2493%. Does anyone know how to fix this?
You have to divide your values by 100 before adding them to the chart data. Percentage formatting expects decimal values normalized to the 0.0-1.0 range, which display as 0-100%.
for moisSingleDate in moisDate:
    # Divide values by 100 to correctly format as percentages
    percentage_values = [value / 100 for value in result[moisSingleDate]]
    chart_data.add_series(str(moisSingleDate), values=percentage_values, number_format='0%')
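For the % symbol on the axis (the second part of the question), python-pptx exposes the axis tick labels' number format; a sketch, assuming the percentages sit on the value axis:
value_axis = chart.value_axis
value_axis.tick_labels.number_format = '0%'
value_axis.tick_labels.number_format_is_linked = False  # stop inheriting the source format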
1
2
79,681,704
2025-6-27
https://stackoverflow.com/questions/79681704/i-have-an-np-array-of-a-number-single-entry-lists-and-i-want-to-add-1-to-each-si
I have created the following array, called X: array([[6.575], [6.421], [7.185], [6.998], [6.43 ], [6.012], [6.172], [5.631], [6.004], [6.377], [6.03 ]]) and I would like to create array([[6.575, 1], [6.421, 1], [7.185, 1], [6.998, 1], [6.43, 1], [6.012, 1], [6.172, 1], [5.631, 1], [6.004, 1], [6.377, 1], [6.03, 1]]) I have tried X = np.array( [ [value,1] for value in X ] ) but I get the error: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (506, 2) + inhomogeneous part. Aside from ignorance I am not sure where I am going wrong. Any ideas?
Do not use list comprehensions/loops to work with numpy arrays. You should expand the scalar with broadcast_to and combine it with X using hstack:
out = np.hstack([X, np.broadcast_to(1, X.shape)])
Output:
array([[6.575, 1.   ],
       [6.421, 1.   ],
       [7.185, 1.   ],
       [6.998, 1.   ],
       [6.43 , 1.   ],
       [6.012, 1.   ],
       [6.172, 1.   ],
       [5.631, 1.   ],
       [6.004, 1.   ],
       [6.377, 1.   ],
       [6.03 , 1.   ]])
Now, just to answer the source of your error: this is because your intermediate list comprehension generates items of the form [array([6.575]), 1] instead of [6.575, 1]. The correct approach would have been:
np.array([[value, 1] for value in X[:, 0]])
# or
np.array([[*value, 1] for value in X])
But, again, do not do this in numpy.
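For what it's worth, a couple of equivalent alternatives (a sketch; both produce the same array here):
out = np.hstack([X, np.ones_like(X)])        # ones with X's shape and dtype
out = np.column_stack([X, np.ones(len(X))])  # column_stack promotes the 1-D ones to a column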
4
4
79,682,876
2025-6-28
https://stackoverflow.com/questions/79682876/how-to-create-same-ticks-and-labels-on-both-axes
I have to create a plot on which ticks and labels are specifically defined; an example of a reproducible plot is given below:
import matplotlib.pyplot as plt
import seaborn as sns

plt.style.use('seaborn-v0_8')

fig, ax = plt.subplots(figsize=(4, 4))

ticks = [0.00, 0.25, 0.50, 0.75, 1.00]

ax.set_xticks(ticks)
ax.set_xticklabels(ticks, weight='bold', size=8)

ax.set_yticks(ticks)
ax.set_yticklabels(ticks, weight='bold', size=8)

plt.show()
As ticks and labels are exactly the same on both axes, is there a way to set them with a single command? Something combining both xticks and yticks?
You can use a function. import matplotlib.pyplot as plt plt.style.use('seaborn-v0_8') def set_ticks_and_labels(ax, t, **kwargs): ax.set_xticks(t) ax.set_xticklabels(t, **kwargs) ax.set_yticks(t) ax.set_yticklabels(t, **kwargs) fig, ax = plt.subplots(figsize=(4, 4)) ticks = [0.00, 0.25, 0.50, 0.75, 1.00] set_ticks_and_labels(ax, ticks, weight='bold', size=8) plt.show()
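An alternative without a helper function, if you prefer, is matplotlib's plt.setp, which can style both label sets in one call (a sketch):
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
ticks = [0.00, 0.25, 0.50, 0.75, 1.00]
ax.set_xticks(ticks)
ax.set_yticks(ticks)
# style x and y tick labels together
plt.setp(ax.get_xticklabels() + ax.get_yticklabels(), weight='bold', size=8)
plt.show()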
1
1