Q_CreationDate (string) | Title (string) | Question (string) | Answer (string) | Score (float64) | Is_accepted (bool) | N_answers (int64) | Q_Id (int64)
---|---|---|---|---|---|---|---|
2019-03-17 20:51:04.933
|
What is the preferred way to add a citation suggestion to Python packages?
|
How should developers indicate how users should cite the package, other than on the documentation?
R packages return the preferred citation using citation("pkg").
I can think of pkg.CITATION, pkg.citation and pkg.__citation__. Are there others? If there is no preferred way (which seems to be the case to me as I did not find anything on python.org), what are the pros and cons of each?
|
Finally I opted for the dunder option. Only the dunder option (__citation__) makes it clear that this is not a normal variable needed at runtime.
Yes, dunder names should not be used gratuitously because Python might claim them for its own purposes at a later time. But if Python ever does use __citation__, it will presumably be for a similar purpose. Also, I consider the relative costs of the other options to be higher.
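For anyone choosing the same route, a minimal sketch of what the package's __init__.py might contain (the package name and citation text are just illustrations):

```python
# mypackage/__init__.py  (hypothetical package name)
__citation__ = """@misc{mypackage,
  author = {Jane Doe},
  title  = {mypackage: a short description},
  year   = {2019},
  url    = {https://github.com/example/mypackage}
}"""
```

Users can then retrieve it with import mypackage; print(mypackage.__citation__).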
| 1.2 | true | 1 | 6,002 |
2019-03-18 14:05:53.610
|
How to see the full previous command in Pycharm Python console using a shortcut
|
I was wondering how I could see the history in the PyCharm Python console using a shortcut. I can see the history using the up arrow key, but if I want to go further back in history I have to step through each individual line when multiple lines were run at a time. Is it possible that each time I press a button, the full previous command that was run is shown?
I don't want to search the history; I want to go back through it similarly to using the up arrow key, but each time I press it I want to see the previous full block of code that was run.
|
Go to Preferences -> Appearance & Behavior -> Keymap. Search for "Browse Console History" and add a keyboard shortcut with right click -> Add Keyboard Shortcut.
| 0 | false | 1 | 6,003 |
2019-03-18 17:28:51.877
|
Python: how to make a set of rules for each class in a game
|
In C# we would use get/set to enforce rules, but I don't know how to do this in Python.
Example:
Orcs can only equip axes; other weapons are not eligible.
Humans can only equip swords; other weapons are not eligible.
How can I tell Python that an Orc cannot do something, as in the example above?
Thanks for any answers in advance; I hope this made sense to you guys.
|
The Python language doesn't have an effective mechanism for restricting access to an instance attribute or method. There is a convention, though, of prefixing the name of a field/method with an underscore to simulate "protected" or "private" behavior.
But all members in a Python class are public by default.
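The closest analogue to C#'s get/set is a Python property. As a rough sketch (class and attribute names are invented for the example, not taken from your code), validation can be done in the setter:

```python
class Orc:
    ALLOWED_WEAPONS = {'axe'}

    def __init__(self):
        self._weapon = None

    @property
    def weapon(self):
        return self._weapon

    @weapon.setter
    def weapon(self, value):
        if value not in self.ALLOWED_WEAPONS:
            raise ValueError(f"An Orc cannot equip a {value}")
        self._weapon = value

orc = Orc()
orc.weapon = 'axe'     # fine
orc.weapon = 'sword'   # raises ValueError
```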
| 0 | false | 1 | 6,004 |
2019-03-18 18:58:53.487
|
Regex to get key words, all digits and periods
|
My input text looks like this:
Put in 3 extenders but by the 4th floor it is weak on signal these don't piggy back of each other. ST -99. 5G DL 624.26 UP 168.20 4g DL 2
Up .44
I am having difficulty writing a regex that will match any instances of 4G/5G/4g/5g and give me all the corresponding measurements after the instances of these codes, which are numbers with decimals.
The output should be:
5G 624.26 168.20 4g 2 .44
Any thoughts how to achieve this? I am trying to do this analysis in Python.
|
I would separate it into different capture groups like this:
(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*)
(?i) makes the whole regex case insensitive
(?P<g1>5?4?G) is the first group matching on either 4g, 5g, 4G or 5G.
(?P<g2>[^\s]*) and (?P<g3>[^\s]*) are the second and third groups, each matching everything up to the next space.
Then in Python you can do:
match = re.search(r'(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*)', text)
(re.match would only look at the start of the string; use re.finditer to walk over every occurrence.)
And access each group like so:
match.group('g1') etc.
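A rough, runnable sketch that collects every occurrence with finditer (reproducing the output shown in the question):

```python
import re

text = ("Put in 3 extenders but by the 4th floor it is weak on signal these "
        "don't piggy back of each other. ST -99. 5G DL 624.26 UP 168.20 4g DL 2\n"
        "Up .44")

pattern = re.compile(r'(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*)')

parts = []
for m in pattern.finditer(text):
    parts.extend([m.group('g1'), m.group('g2'), m.group('g3')])

print(' '.join(parts))   # 5G 624.26 168.20 4g 2 .44
```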
| 0.135221 | false | 1 | 6,005 |
2019-03-19 03:35:02.983
|
In Zapier, how do I get the inputs to my Python "Run Code" action to be passed in as lists and not joined strings?
|
In Zapier, I have a "Run Python" action triggered by a "Twitter" event. One of the fields passed to me by the Twitter event is called "Entities URLs Display URL". It's the list of anchor texts of all of the links in the tweet being processed.
Zapier is passing this value into my Python code as a single comma-separated string. I know I can use .split(',') to get a list, but this results in ambiguity if the original strings contained commas.
Is there some way to get Zapier to pass this sequence of strings into my code as a sequence of strings rather than as a single joined-together string?
|
David here, from the Zapier Platform team.
At this time, all inputs to a code step are coerced into strings due to the way data is passed between zap steps. This is a great request though and I'll make a note of it internally.
| 0.673066 | false | 1 | 6,006 |
2019-03-19 07:09:31.563
|
Where is the tesseract executable file located on MacOS, and how to define it in Python?
|
I have made a code using pytesseract and whenever I run it, I get this error:
TesseractNotFoundError: tesseract is not installed or it's not in your path
I have installed tesseract using HomeBrew and also pip installed it.
|
If installed with Homebrew, it will be located in /usr/local/bin/tesseract by default. To verify this, run which tesseract in the terminal as Dmitrrii Z. mentioned.
If it's there, you can set it up in your python environment by adding the following line to your python script, after importing the library:
pytesseract.pytesseract.tesseract_cmd = r'/usr/local/bin/tesseract'
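Putting it together, a small sketch (the path assumes a default Homebrew install; adjust it to whatever which tesseract prints, and the image name is just a placeholder):

```python
import pytesseract
from PIL import Image

pytesseract.pytesseract.tesseract_cmd = r'/usr/local/bin/tesseract'
print(pytesseract.image_to_string(Image.open('scan.png')))
```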
| 0.673066 | false | 1 | 6,007 |
2019-03-19 09:10:22.167
|
Call function from file that has already imported the current file
|
If I have the files frame.py and bindings.py both with classes Frame and Bindings respectively inside of them, I import the bindings.py file into frame.py by using from bindings import Bindings but how do I go about importing the frame.py file into my bindings.py file. If I use import frame or from frame import Frame I get the error ImportError: cannot import name 'Bindings' from 'bindings'. Is there any way around this without restructuring my code?
|
Instead of using from bindings import Bindings try import bindings.
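A minimal two-file sketch of that idea (class layout taken from the question): each module imports the other as a module and only looks the class up at call time, after both files have finished loading.

```python
# bindings.py
import frame                      # module import instead of "from frame import Frame"

class Bindings:
    def build_frame(self):
        return frame.Frame()      # resolved at call time, once frame.py is fully loaded

# frame.py
import bindings                   # same pattern on the other side

class Frame:
    def __init__(self):
        self.bindings = bindings.Bindings()
```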
| 0 | false | 1 | 6,008 |
2019-03-20 10:03:02.943
|
How to only enter a date that is a weekday in Python
|
I'm creating a web application in Python and I only want the user to be able to enter a weekday that is older than today's date. I've had a look at isoweekday() for example but don't know how to integrate it into a Flask form. The form currently looks like this:
appointment_date = DateField('Appointment Date', format='%Y-%m-%d', validators=[DataRequired()])
Thanks
|
If you just want a weekday, you should use a select or a textbox, not a date picker.
If you use a select, you can disable the days before today, so you don't even need validation.
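If you do want to keep the DateField, one possible sketch (not tested against your form class) is a custom WTForms validator built around isoweekday(), which the question already mentions:

```python
from wtforms import DateField, ValidationError
from wtforms.validators import DataRequired

def weekday_only(form, field):
    # isoweekday(): Monday=1 ... Sunday=7
    if field.data and field.data.isoweekday() > 5:
        raise ValidationError('Please choose a weekday (Monday-Friday).')

appointment_date = DateField('Appointment Date', format='%Y-%m-%d',
                             validators=[DataRequired(), weekday_only])
```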
| 0 | false | 1 | 6,009 |
2019-03-20 23:43:33.130
|
Speed up access to python programs from Golang's exec package
|
I need suggestions on how to speed up access to python programs when called from Golang. I really need fast access time (very low latency).
Approach 1:
func main() {
...
...
cmd = exec.Command("python", "test.py")
o, err = cmd.CombinedOutput()
...
}
If my test.py file is a basic print "HelloWorld" program, the execution time is over 50ms. I assume most of the time is for loading the shell and python in memory.
Approach 2:
The above approach can be sped up substantially by having Python start an HTTP server and then having the Go code POST an HTTP request and read the response from the HTTP server (Python). This brings response times down to less than 5 ms.
I guess the main reason for this is that the Python interpreter is already loaded and warm in memory.
Are there other approaches I can use, similar to approach 2 (shared memory, etc.), which could speed up the response from my Python code? Our application requires extremely low latency, and the 50 ms I am currently seeing from using Golang's exec package is not going to cut it.
thanks,
|
Approach 1: Simple HTTP server and client
Approach 2: Local socket or pipe
Approach 3: Shared memory
Approach 4: GRPC server and client
In fact, I prefer the gRPC approach with streaming: it keeps the connection open (thanks to HTTP/2), and it's easy, fast and secure. It also makes it easy to move the Python node to another machine.
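As a rough illustration of approach 2, a warm Python worker listening on a Unix domain socket (the socket path and one-line protocol are arbitrary choices); the Go side would dial the same socket with net.Dial("unix", ...):

```python
import os
import socket

SOCK_PATH = '/tmp/pyworker.sock'
if os.path.exists(SOCK_PATH):
    os.remove(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

while True:
    conn, _ = server.accept()
    with conn:
        request = conn.makefile('r').readline().strip()
        # ... the actual Python work happens here ...
        conn.sendall(('processed: ' + request + '\n').encode())
```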
| 0 | false | 1 | 6,010 |
2019-03-21 20:01:04.153
|
Python: Iterate through every pixel in an image for image recognition
|
I'm a newbie in image processing and Python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example [28, 78, 72]) and creates some kind of values by comparing it to every other pixel. I did manage to access a single number in an array element/pixel (output: 28) through a bunch of for loops, but I just couldn't figure out how to access every number in every pixel, in every row. Does anyone know a good algorithm to solve my problem? I use OpenCV for reading in the image, by the way.
|
Comparing every pixel with a "pattern" can be done with convolution. You should take a look at Haar cascade algorithm.
| 0 | false | 1 | 6,011 |
2019-03-21 20:38:04.357
|
numpy.savetxt() rounding values
|
I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Does anyone have any clue how to change this?
|
You can set the precision through the fmt parameter. For example np.savetxt('tmp.txt', a, fmt='%1.3f') would leave you with output to three decimal places.
| 0.386912 | false | 1 | 6,012 |
2019-03-22 03:06:43.583
|
Training SVM in Python with pictures
|
I have basic knowledge of SVMs, but now I am working with images. I have images in 5 folders; each folder, for example, has images for the letters a, b, c, d, e. The folder 'a' has images of the handwritten letter 'a', folder 'b' has images of the handwritten letter 'b', and so on.
Now how can I use the images as my training data for an SVM in Python?
|
As far as I understood, you want to train your SVM to classify these images into the classes a, b, c, d, e. For that, you can use any good image-processing technique to extract features from your images (such as HOG, which is nicely implemented in OpenCV), and then use those features, together with the label, as the input to your SVM training (the corresponding label for each image being the name of its folder, i.e. a, b, c, d, e). You train your SVM using the features only; at inference time, you simply calculate the HOG feature of the image, feed it to your SVM, and it will give you the desired output.
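A rough sketch of that pipeline with OpenCV's HOG descriptor and scikit-learn (the folder layout, image size and HOG settings are assumptions, not taken from the question):

```python
import os
import cv2
import numpy as np
from sklearn.svm import SVC

hog = cv2.HOGDescriptor()                      # default 64x128 window
features, labels = [], []

for label in ['a', 'b', 'c', 'd', 'e']:
    folder = os.path.join('letters', label)    # one folder per class
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 128))       # match the HOG window size
        features.append(hog.compute(img).flatten())
        labels.append(label)

clf = SVC(kernel='linear')
clf.fit(np.array(features), labels)

# Inference: same feature extraction, then predict.
test = cv2.resize(cv2.imread('some_letter.png', cv2.IMREAD_GRAYSCALE), (64, 128))
print(clf.predict([hog.compute(test).flatten()]))
```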
| 0 | false | 1 | 6,013 |
2019-03-22 12:32:50.940
|
How to execute script from container within another container?
|
I have a containerized Flask app with an external DB that logs users into another site using Selenium. Everything works perfectly on localhost. I want to deploy this app using containers and found that a Selenium container with Google Chrome inside could do the job. My question is: how do I execute scripts/methods from the Flask container in the Selenium container? I tried to find some helpful info, but I didn't find anything.
Should I make an API call from the Selenium container to the Flask container? Is that the way, or maybe something different?
|
As far as I understood, you are trying to take your local implementation, which runs on your PC, and put it into two different Docker containers. Then you want to make a call from the Selenium container to your container containing the Flask script, which connects to your database.
In this case, you can think of your containers as two different computers. You can tell Docker to create an internal network between these two containers and send the request via an API call, as you suggested. But you are not limited to this approach; you can use any technique that works for two computers to exchange commands.
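As one concrete sketch of that idea: the standard selenium/standalone-chrome image exposes a WebDriver endpoint on port 4444, so the Flask container can drive that browser remotely (the hostname "selenium" is an assumption; it would be the service name on the shared Docker network):

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
    command_executor='http://selenium:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get('https://example.com')
print(driver.title)
driver.quit()
```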
| 1.2 | true | 1 | 6,014 |
2019-03-22 21:15:34.407
|
Visual Studio doesn't work with Anaconda environment
|
I downloaded VS2019 preview to try how it works with Python.
I use Anaconda, and VS2019 sees the Anaconda virtual environment, terminal opens and works but when I try to launch 'import numpy', for example, I receive this:
An internal error has occurred in the Interactive window. Please
restart Visual Studio. Intel MKL FATAL ERROR: Cannot load
mkl_intel_thread.dll. The interactive Python process has exited.
Does anyone know how to fix it?
|
I had the same issue; this worked for me:
Try adding conda-env-root/Library/bin to the path in the run environment.
| 0 | false | 1 | 6,015 |
2019-03-24 17:23:41.657
|
Automatically filled field in model
|
I have a model with a date field and a CharField with the choices New or Done, and I want to show a message for this model's objects in my API views if two conditions are met: the date is past and the status is New. But I really don't know how to resolve this.
I was thinking that maybe there is a way to add a field to the model that has choices and sets the suitable choice when the conditions are fulfilled, but I didn't find any information on whether something like this is possible, so maybe someone has an idea how to resolve this?
|
You can override the save method of your model. The overridden method must check the condition and set the message.
Alternatively, you may attach a signal receiver to the post_save signal that does the same as (1).
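A rough sketch of the first option (the model and field names are guesses based on the question, not known code):

```python
from django.db import models
from django.utils import timezone

class Item(models.Model):
    STATUS_CHOICES = [('NEW', 'New'), ('DONE', 'Done')]

    due_date = models.DateField()
    status = models.CharField(max_length=4, choices=STATUS_CHOICES, default='NEW')
    message = models.CharField(max_length=100, blank=True)

    def save(self, *args, **kwargs):
        if self.status == 'NEW' and self.due_date < timezone.now().date():
            self.message = 'This item is overdue.'
        super().save(*args, **kwargs)
```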
| 0 | false | 1 | 6,016 |
2019-03-25 03:15:40.920
|
how to drop multiple (~5000) columns in the pandas dataframe?
|
I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the columns names (that I wanna keep) in a dataframe as well, with the names as the row index. Is there any way to do this?
|
Let us assume your DataFrame is named as df and you have a list cols of column indices you want to retain. Then you should use:
df1 = df.iloc[:, cols]
This statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame.
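Since the question has the column names (rather than positions) stored as the row index of another DataFrame, an equivalent sketch using labels would be (df_names being that second DataFrame):

```python
cols_to_keep = df_names.index.tolist()   # the 500 column names to retain
df1 = df.loc[:, cols_to_keep]            # or simply df[cols_to_keep]
```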
| 0 | false | 1 | 6,017 |
2019-03-26 17:26:01.377
|
How to configure PuLP to call GLPK solver
|
I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK?
I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver for that matter), so that it finds my GLPK installation. I have already installed GLPK seperately (but I didn't add it to my PATH environment variable).
I ran the command pulp.pulpTestAll()
and got:
Solver <class 'pulp.solvers.GLPK_CMD'> unavailable
I know that I should be getting a "passed" instead of an "unavailable" to be able to use it.
|
I had the same problem, but it was not related to the GLPK installation; it was related to creating the solution file, and the message is confusing. My problem was that I used numeric names for my variables, such as '0238' or '1342'. I added an 'x' in front of each, so they looked like 'x0238'.
| 0.201295 | false | 2 | 6,018 |
2019-03-26 17:26:01.377
|
How to configure PuLP to call GLPK solver
|
I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK?
I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver for that matter), so that it finds my GLPK installation. I have already installed GLPK seperately (but I didn't add it to my PATH environment variable).
I ran the command pulp.pulpTestAll()
and got:
Solver <class 'pulp.solvers.GLPK_CMD'> unavailable
I know that I should be getting a "passed" instead of an "unavailable" to be able to use it.
|
After reading in more detail the code and testing out some things, I finally found out how to use GLPK with PuLP, without changing anything in the PuLP package itself.
You need to pass the path as an argument to GLPK_CMD in solve as follows (replace with your glpsol path):
lp_prob.solve(GLPK_CMD(path='C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe'))
You can also pass options that way, e.g.
lp_prob.solve(GLPK_CMD(path='C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe', options=["--mipgap", "0.01", "--tmlim", "1000"]))
| 1.2 | true | 2 | 6,018 |
2019-03-26 23:03:52.333
|
Tower of colored cubes
|
Consider a set of n cubes with colored facets (each one with a specific color
out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k ≤ n ), properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using an evolutionary algorithm.
What I did so far:
I thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines, each line shows information about the color of each face of the cube).
Then, the Population consists of multiple Individuals.
The Crossover method should create a new child(Individual), containing information from its parents (approximately half from each parent).
Now, my biggest issue is related to the Mutate and Fitness methods.
In the Mutate method, if the probability of mutation is met (say 0.01), I should change the position of a random cube to another random position (for example, the third cube can have its position (rotation) changed from 5 to 12).
In Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a "count" variable will be incremented with the number of common faces and if all the 4 lateral faces will be the same for these 2 cubes, the count will increase with another number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method.
My question is the following:
How can be a rotation implemented? I mean, if a cube changes its position(rotation) from 3, to 10, how do we know the new arrangement of the faces? Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected?
I think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged.
Shuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent).
|
First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get.
Apparently, the first thing you need to build is a something (a class?) that accurately represents a cube in any of the available orientations. I suggest that you use a simple structure that can return the four faces in order -- say, front-right-back-left -- given a cube and the rotation number.
I think you can effectively represent a cube as three pairs of opposing sides. Once you've represented that opposition, the remaining organization is arbitrary numbering: any valid choice is isomorphic to any other. Each rotation will produce an interleaved sequence of two opposing pairs. For instance, a standard D6 has opposing pairs [(1, 6), (2, 5), (3, 4)]. The first 8 rotations would put 1 and 6 on the hidden faces (top and bottom), giving you the sequence 2354 in each of its 4 rotations and their reverses.
That class is one large subsystem of your problem; the other, the genetic algorithm, you seem to have well in hand. Stack all of your cubes randomly; "fitness" is a count of the most prevalent 4-show (sequence of 4 sides) in the stack. At the start, this will generally be 1, as nothing will match.
From there, you seem to have an appropriate handle on mutation. You might give a higher chance of mutating a non-matching cube, or perhaps see if some cube is a half-match: two opposite faces match the "best fit" 4-show, so you merely rotate it along that axis, preserving those two faces, and swapping the other pair for the top-bottom pair (note: two directions to do that).
Does that get you moving?
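A minimal sketch of such a structure, assuming each cube is given as three opposing pairs of face colours (the exact numbering of the 24 orientations is arbitrary, as noted above):

```python
def lateral_faces(pairs, rotation):
    """Return the four lateral faces (front, right, back, left) of a cube
    described by three opposing pairs, for orientation number 0..23."""
    axis = rotation // 8          # which pair is hidden (top/bottom)
    flip = (rotation // 4) % 2    # which member of that pair faces up
    turn = rotation % 4           # rotation about the vertical axis
    p, q = [pairs[i] for i in range(3) if i != axis]
    cycle = [p[0], q[0], p[1], q[1]]      # faces seen walking around the cube
    if flip:
        cycle.reverse()                   # flipping the cube reverses the walk
    return cycle[turn:] + cycle[:turn]

cube = [('red', 'green'), ('blue', 'yellow'), ('green', 'blue')]
for r in range(24):
    print(r, lateral_faces(cube, r))
```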
| 0 | false | 1 | 6,019 |
2019-03-27 20:20:56.763
|
Airflow: How to download file from Linux to Windows via smbclient
|
I have a DAG that imports data from a source to a server. From there, I am looking to download that file from the server to the Windows network. I would like to keep this part in Airflow for automation purposes. Does anyone know how to do this in Airflow? I am not sure whether to use the os package, the shutil package, or maybe there is a different approach.
|
I think you're saying you're looking for a way to get files from a cloud server to a windows shared drive or onto a computer in the windows network, these are some options I've seen used:
Use a service like google drive, dropbox, box, or s3 to simulate a synced folder on the cloud machine and a machine in the windows network.
Call a bash command to SCP the files to the windows server or a worker in the network. This could work in the opposite direction too.
Add the files to a git repository and have a worker in the windows network sync the repository to a shared location. This option is only good in very specific cases. It has the benefit that you can track changes and restore old states (if the data is in CSV or another text format), but it's not great for large files or binary files.
Use rsync to transfer the files to a worker in the windows network which has the shared location mounted and move the files to the synced dir with python or bash.
Mount the network drive to the server and use python or bash to move the files there.
All of these should be possible with Airflow by either using python (shutil) or a bash script to transfer the files to the right directory for some other process to pick up or by calling a bash sub-process to perform the direct transfer by SCP or commit the data via git. You will have to find out what's possible with your firewall and network settings. Some of these would require coordinating tasks on the windows side (the git option for example would require some kind of cron job or task scheduler to pull the repository to keep the files up to date).
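As one concrete sketch of the last option (mounting the share and copying with Python from an Airflow task), with made-up paths and DAG names:

```python
from datetime import datetime

import shutil
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def copy_to_share():
    # assumes the Windows share is mounted at /mnt/windows_share on the worker
    shutil.copy('/data/export/report.csv', '/mnt/windows_share/report.csv')

dag = DAG('copy_to_windows',
          start_date=datetime(2019, 1, 1),
          schedule_interval='@daily')

PythonOperator(task_id='copy_file_to_share',
               python_callable=copy_to_share,
               dag=dag)
```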
| 0 | false | 1 | 6,020 |
2019-03-29 18:04:09.080
|
Python GTK+ 3: Is it possible to make background window invisible?
|
basically I have this window with a bunch of buttons but I want the background of the window to be invisible/transparent so the buttons are essentially floating. However, GTK seems to be pretty limited with CSS and I haven't found a way to do it yet. I've tried making the main window opacity 0 but that doesn't seem to work. Is this even possible and if so how can I do it? Thanks.
Edit: Also, I'm using X11 forwarding.
|
For transparency Xorg requires a composite manager running on the X11 server. The compmgr program from Xorg is a minimal composite manager.
| 0 | false | 1 | 6,021 |
2019-03-30 18:02:10.470
|
Matplotlib with Pydroid 3 on Android: how to see graph?
|
I'm currently using an Android device (a Samsung) with Pydroid 3.
I tried to view some graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window.
(That means I can't even see the terminal screen, which always showed me [Program Finished].)
Even the basic sample code that Pydroid provides doesn't show me the graph :(
I've seen many tutorials where the graphs showed up successfully, but mine can't do that.
Unfortunately, I cannot capture any errors.
I'm using the same code that worked on Windows, so I don't think the code is the problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know.
|
I also had this problem a while back, and managed to fix it by calling plt.show()
at the end of the code (with matplotlib.pyplot imported as plt).
| 0.101688 | false | 3 | 6,022 |
2019-03-30 18:02:10.470
|
Matplotlib with Pydroid 3 on Android: how to see graph?
|
I'm currently using an Android device (a Samsung) with Pydroid 3.
I tried to view some graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window.
(That means I can't even see the terminal screen, which always showed me [Program Finished].)
Even the basic sample code that Pydroid provides doesn't show me the graph :(
I've seen many tutorials where the graphs showed up successfully, but mine can't do that.
Unfortunately, I cannot capture any errors.
I'm using the same code that worked on Windows, so I don't think the code is the problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know.
|
After reinstalling, it worked.
The problem was that I had forced Pydroid to update matplotlib via the terminal, not the official PIP tab.
That version of matplotlib was too recent for Pydroid.
| 1.2 | true | 3 | 6,022 |
2019-03-30 18:02:10.470
|
Matplotlib with Pydroid 3 on Android: how to see graph?
|
I'm currently using an Android device (a Samsung) with Pydroid 3.
I tried to view some graphs, but it doesn't work.
When I run the code, it just shows a blank black screen temporarily and then goes back to the source-code editing window.
(That means I can't even see the terminal screen, which always showed me [Program Finished].)
Even the basic sample code that Pydroid provides doesn't show me the graph :(
I've seen many tutorials where the graphs showed up successfully, but mine can't do that.
Unfortunately, I cannot capture any errors.
I'm using the same code that worked on Windows, so I don't think the code is the problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know.
|
You just need to add the line
plt.show()
and then it will work. You can also save the figure before showing it:
plt.savefig("image_name.png")
| 0 | false | 3 | 6,022 |
2019-03-31 02:36:13.693
|
Accidentally used homebrew to change my default python to 3.7, how do I change it back to 2.7?
|
I was trying to install python 3 because I wanted to work on a project using python 3. Instructions I'd found were not working, so I boldly ran brew install python. Wrong move. Now when I run python -V I get "Python 3.7.3", and when I try to enter a virtualenv I get -bash: /Users/elliot/Library/Python/2.7/bin/virtualenv: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
My ~/.bash_profile reads
export PATH="/Users/elliot/Library/Python/2.7/bin:/usr/local/opt/python/libexec/bin:/Library/PostgreSQL/10/bin:$PATH"
but ls /usr/local/Cellar/python/ gets me 3.7.3 so it seems like brew doesn't even know about my old 2.7 version anymore.
I think what I want is to reset my system python to 2.7, and then add python 3 as a separate python running on my system. I've been googling, but haven't found any advice on how to specifically use brew to do this.
Edit: I'd also be happy with keeping Python 3.7, if I knew how to make virtualenv work again. I remember hearing that upgrading your system python breaks everything, but I'd be super happy to know if that's outdated knowledge and I'm just being a luddite hanging on to 2.7.
|
So, I got through it by completely uninstalling Python, which I'd been reluctant to do, and then reinstalled Python 2. I had to update my path and open a new shell to get it to see the new python 2 installation, and things fell into place. I'm now using pyenv for my Python 3 project, and it's a dream.
| 0 | false | 1 | 6,023 |
2019-03-31 06:41:35.623
|
How does one transfer python code written in a windows laptop to a samsung android phone?
|
I created numerous Python scripts on my PC laptop, and I want to run those scripts on my Android phone. How can I do that? How can I move Python scripts from my Windows PC laptop and use those scripts on my Samsung Android phone?
I have downloaded QPython from the Google Play store, but I still don't know how to get my PC Python programs onto my phone. I heard some people talk about "FTP" but I don't even know what that means.
Thanks
|
Send them to yourself via email, then download the scripts onto your phone and run them through QPython.
However, be aware that not all Python modules work in QPython, so your scripts may not work the same once you transfer them.
| 0 | false | 2 | 6,024 |
2019-03-31 06:41:35.623
|
How does one transfer python code written in a windows laptop to a samsung android phone?
|
I created numerous Python scripts on my PC laptop, and I want to run those scripts on my Android phone. How can I do that? How can I move Python scripts from my Windows PC laptop and use those scripts on my Samsung Android phone?
I have downloaded QPython from the Google Play store, but I still don't know how to get my PC Python programs onto my phone. I heard some people talk about "FTP" but I don't even know what that means.
Thanks
|
You can use TeamViewer to control your Android phone from your PC and copy and paste the code easily.
Or
you can transfer your scripts to your phone's storage, into the QPython folder, and open them with QPython for Android.
| 0 | false | 2 | 6,024 |
2019-04-01 16:47:12.417
|
how to find text before and after given words and output into different text files?
|
I have a text file like this:
...
NAME : name-1
...
NAME : name-2
...
...
...
NAME : name-n
...
I want output text files like this:
name_1.txt : NAME : name-1 ...
name_2.txt : NAME : name-2 ...
...
name_n.txt : NAME : name-n ...
I have the basic knowledge of grep, sed, awk, shell scripting, python.
|
With GNU sed:
sed "s/\(.*\)\(name-.*\)/echo '\1 \2' > \2.txt/;s/-/_/2e" input-file
This turns the line NAME : name-2 into echo 'NAME : name-2' > name-2.txt.
Then it replaces the second - with _, yielding echo 'NAME : name-2' > name_2.txt.
The e flag has the shell run the command constructed in the pattern buffer.
This outputs blank lines to stdout, but creates a file for each matching line.
This depends on the file having nothing but lines matching this format... but you can expand the gist here to skip other lines with n.
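If you also want the lines following each NAME line (not just the NAME line itself) in the per-name files, a rough Python alternative is to switch output files whenever a new NAME line appears (the input file name is an assumption):

```python
import re

out = None
with open('input.txt') as src:
    for line in src:
        m = re.match(r'NAME : name-(\S+)', line)
        if m:
            if out:
                out.close()
            out = open('name_{}.txt'.format(m.group(1)), 'w')
        if out:
            out.write(line)
if out:
    out.close()
```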
| 0 | false | 1 | 6,025 |
2019-04-02 09:36:07.253
|
Unable to parse the rows in ResultSet returned by connection.execute(), Python and SQLAlchemy
|
I have a task to compare the data of two tables in two different Oracle databases. We have access to views in both DBs. Using SQLAlchemy, I'm able to fetch rows from the views but unable to parse them.
In one db the type of ID column is : Raw
In db where column type is "Raw", below is the row am getting from resultset .
(b'\x0b\x975z\x9d\xdaF\x0e\x96>[Ig\xe0/', 1, datetime.datetime(2011, 6, 7, 12, 11, 1), None, datetime.datetime(2011, 6, 7, 12, 11, 1), b'\xf2X\x8b\x86\x03\x00K|\x99(\xbc\x81n\xc6\xd3', None, 'I', 'Inactive')
ID Column data: b'\x0b\x975z\x9d\xdaF\x0e\x96>[_Ig\xe0/'
Actual data in ID column in database: F2588B8603004B7C9928BC816EC65FD3
This data is not in complete hexadecimal format, as it has some special symbols like >|[_ etc. I want to know how I can parse the data in the ID column and get it as a string.
|
bytes.hex() solved the problem
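For example, a tiny sketch with the first few bytes of an ID:

```python
raw_id = b'\x0b\x975z'           # first bytes of the ID column from the result set
print(raw_id.hex().upper())      # -> 0B97357A
```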
| 1.2 | true | 1 | 6,026 |
2019-04-02 12:30:37.360
|
How to install Python packages from python3-apt in PyCharm on Windows?
|
I'm on Windows and want to use the Python package apt_pkg in PyCharm.
On Linux I get the package by doing sudo apt-get install python3-apt but how to install apt_pkg on Windows?
There is no such package on PyPI.
|
There is no way to run apt-get in Windows; the package format and the supporting infrastructure is very explicitly Debian-specific.
| 0.201295 | false | 1 | 6,027 |
2019-04-03 14:59:12.000
|
“Close and Halt” feature is not functioning in Jupyter notebook launched under Canopy on macOS High Sierra
|
When I'm done with my work, I try to close my Jupyter notebook via 'Close and Halt' under the File menu. However, it somehow does not work.
I am running the notebook from Canopy, version 2.1.9.3717, under macOS High Sierra.
|
If you are running Jupyter notebook from Canopy, then the Jupyter notebook interface is not controlling the kernel; rather, Canopy's built-in ipython Qtconsole is. You can restart the kernel from the Canopy run menu.
| 0.386912 | false | 1 | 6,028 |
2019-04-03 17:51:59.123
|
Running an external Python script on a Django site
|
I have a Python script which communicates with a financial site through an API. I also have a Django site; I would like to create a basic form on my site where I input something and, according to that input, my Python script should perform some operations.
How can I do this? I'm not asking for any code; I just would like to understand how to accomplish this. How can I "run" a Python script in a Django project? Should I make my Django project communicate with the script through a POST request? Or is there a simpler way?
|
Since you don't want code, and you didn't get detailed on everything required required, here's my suggestion:
Make sure your admin.py file has editable fields for the model you're using.
Make an admin action,
Take the selected row with the values you entered, and run that action with the data you entered.
I would be more descriptive, but I'd need more details to do so.
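A rough sketch of that admin action (the model name, field names and the imported script function are all placeholders):

```python
from django.contrib import admin
from .models import ScriptRequest
from .finance_script import run_script      # your existing Python script

def run_on_selected(modeladmin, request, queryset):
    for obj in queryset:
        run_script(obj.input_value)
run_on_selected.short_description = "Run the financial script on selected rows"

@admin.register(ScriptRequest)
class ScriptRequestAdmin(admin.ModelAdmin):
    list_display = ("input_value",)
    actions = [run_on_selected]
```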
| 0.386912 | false | 2 | 6,029 |
2019-04-03 17:51:59.123
|
Running an external Python script on a Django site
|
I have a Python script which communicates with a financial site through an API. I also have a Django site; I would like to create a basic form on my site where I input something and, according to that input, my Python script should perform some operations.
How can I do this? I'm not asking for any code; I just would like to understand how to accomplish this. How can I "run" a Python script in a Django project? Should I make my Django project communicate with the script through a POST request? Or is there a simpler way?
|
I agree with @Daniel Roseman
If you are looking for your program to be faster, maybe multi-threading would be useful.
| 0 | false | 2 | 6,029 |
2019-04-04 02:39:46.887
|
Tracking any change in a table on SQL Server with Python
|
How are you today?
I'm a newbie in Python. I'm working with SQL Server 2014 and Python 3.7. My issue is: when any change occurs in a table in the DB, I want to receive a message (or event, or something like that) on my server (a Web API, if you like that name).
I don't know how to do that with Python.
I have some prior experience: I worked with C# and SQL Server, and in that case I used the "SqlDependency" mechanism in C# to solve it. It's really good!
Is there something like that in Python? Many thanks for any ideas, please!
Thank you so much.
|
I do not know many things about SQL, but I guess there are tools in SQL Server to detect those changes. You could then create a long-running loop in a separate thread, using the threading package, to capture that change. (Remember to use time.sleep() to block the thread so that it doesn't occupy the CPU for too long.) Once you capture the change, you can call the function that you want to use. (Actually, you could design a simple event engine to do that.) I am a newbie in computer science and I hope my answer is correct and helpful. :)
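A very rough sketch of that polling-thread idea with pyodbc (the connection string, table name and the CHECKSUM_AGG trick are all placeholders; SQL Server's built-in Change Tracking or triggers could serve the same purpose):

```python
import time
import threading
import pyodbc

def watch_table(conn_str, on_change, interval=5):
    conn = pyodbc.connect(conn_str)
    last = None
    while True:
        value = conn.execute(
            "SELECT CHECKSUM_AGG(CHECKSUM(*)) FROM dbo.MyTable").fetchone()[0]
        if last is not None and value != last:
            on_change()                  # e.g. notify your Web API
        last = value
        time.sleep(interval)             # block so the thread doesn't hog the CPU

threading.Thread(target=watch_table,
                 args=("DSN=mydb;UID=user;PWD=secret",
                       lambda: print("table changed")),
                 daemon=True).start()
```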
| 0 | false | 1 | 6,030 |
2019-04-04 07:59:55.183
|
virtual real time limit (178/120s) reached
|
I am using Ubuntu 16 and running the Odoo ERP system, version 12.0.
In my application log file I see a message that says "virtual real time limit (178/120s) reached".
What exactly does it mean, and what damage can it cause to my application?
Also, how can I increase the virtual real time limit?
|
Open your config file and just add the parameter below:
--limit-time-real=100000
| 0.986614 | false | 1 | 6,031 |
2019-04-04 15:23:10.660
|
How to handle multiple major versions of dependency
|
I'm wondering how to handle multiple major versions of a dependency library.
I have an open source library, Foo, at an early release stage. The library is a wrapper around another open source library, Bar. Bar has just launched a new major version. Foo currently only supports the previous version. As I'm guessing that a lot of people will be very slow to convert from the previous major version of Bar to the new major version, I'm reluctant to switch to the new version myself.
How is this best handled? As I see it I have these options
Switch to the new major version, potentially denying people on the old version.
Keep going with the old version, potentially denying people on the new version.
Have two different branches, updating both branches for all new features. Not sure how this works with PyPi. Wouldn't I have to release at different version numbers each time?
Separate the repository into two parts. Don't really want to do this.
The ideal solution for me would be to have the same code base, where I could have some sort of C/C++ macro-like thing where if the version is new, use new_bar_function, else use old_bar_function. When installing the library from PyPi, the already installed version of the major version dictates which version is used. If no version is installed, install the newest.
Would much appreciate some pointers.
|
Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time?
Yes, you could have a 1.x release (that supports the old version) and a 2.x release (that supports the new version) and release both simultaneously. This is a common pattern for packages that want to introduce a breaking change, but still want to continue maintaining the previous release as well.
| 0.201295 | false | 1 | 6,032 |
2019-04-05 16:28:56.133
|
How do apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step?
|
I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (i.e. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However I cannot find a way to solve problems where multiple actions are taken simultaneously at each time step. For example in the Roboschool Reacher environment, 2 torque values - one for each axis - must be specify at each time step. The problem is that the Q matrix is built from (state, action) pairs. However, if more than one action are taken simultaneously, it is not straightforward to build the Q matrix.
The book "Deep Reinforcement Learning Hands-On" by Maxim Lapan mentions this but does not give a clear answer, see quotation below.
Of course, we're not limited to a single action to perform, and the environment could have multiple actions, such as pushing multiple buttons simultaneously or steering the wheel and pressing two pedals (brake and accelerator). To support such cases, Gym defines a special container class that allows the nesting of several action spaces into one unified action.
Does anybody know how to deal with multiple actions in Q learning?
PS: I'm not talking about the issue "continuous vs discrete action space", which can be tackled with DDPG.
|
You can take one of two approaches, depending on the problem:
Treat the actions you need to pass to the environment as independent and make the network output action values for each one (with a separate softmax for each), so if you need to pass two actions, the network will have two heads, one per axis.
Treat them as dependent and look at the Cartesian product of the action sets, then make the network output a value for each element of that product; so if you have two actions to pass and 5 options for each, the size of the output layer will be 5*5=25, and you just use a softmax over that.
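A tiny sketch of the second idea for a tabular Q-learning setting (the torque values are invented): enumerate the joint actions once, index the Q table by a single joint index, and decode the index when stepping the environment.

```python
from itertools import product

torque_options = [-1.0, -0.5, 0.0, 0.5, 1.0]
joint_actions = list(product(torque_options, repeat=2))   # 5 * 5 = 25 joint actions

# Q[state][joint_index] is learned as usual; decoding a chosen index:
joint_index = 7
torque_axis_1, torque_axis_2 = joint_actions[joint_index]
print(torque_axis_1, torque_axis_2)
```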
| 0.673066 | false | 1 | 6,033 |
2019-04-06 19:50:46.103
|
How to install python3.6 in parallel with python 2.7 in Ubuntu 18
|
I'm setting up to start Python for data analytics and want to install Python 3.6 on Ubuntu 18. Should I run both versions in parallel or overwrite 2.7, and how? The methods I find when searching are ambiguous.
|
Try pyenv and/or pipenv . Both are excellent tools to maintain local python installations.
| 0 | false | 1 | 6,034 |
2019-04-07 08:00:53.180
|
how to display the month in form view? (Odoo 11)
|
Please, how do I display the month in the form? Example:
I want to change 07/04/2019 into 07 April, 2019.
Thank you in advance.
|
Try the following steps:
Go to Translations > Languages.
Open the record for your current language.
Edit the date format to %d %B, %Y (%B is the full month name).
| 0.386912 | false | 1 | 6,035 |
2019-04-07 14:08:01.350
|
How to fix print((double parentheses)) after 2to3 conversion?
|
When migrating my project to Python 3 (2to3-3.7 -w -f print *), I observed that a lot of (but not all) print statements became print((...)), so these statements now print tuples instead of performing the expected behavior. I gather that if I'd used -p, I'd be in a better place right now because from __future__ import print_function is at the top of every affected module.
I'm thinking about trying to use sed to fix this, but before I break my teeth on that, I thought I'd see if anyone else has dealt with this before. Is there a 2to3 feature to clean this up?
I do use version control (git) and have commits immediately before and after (as well as the .bak files 2to3 creates), but I'm not sure how to isolate the changes I've made from the print situations.
|
If your code already has print() functions you can use the -x print argument to 2to3 to skip the conversion.
| 0.613357 | false | 1 | 6,036 |
2019-04-08 06:39:59.923
|
Windowed writes in python, e.g. to NetCDF
|
In python how can I write subsets of an array to disk, without holding the entire array in memory?
The xarray input/output docs note that xarray does not support incremental writes, only incremental reads except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the connected file.) The dask docs suggest it might be necessary to save the entire array after each manipulation?
|
This can be done using netCDF4 (the python library of low level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file.
Note this approach also provides the opportunity to progressively grow a dimension of the array (by calling createDimension with size None, making it the first dimension of a variable, and iteratively assigning to incrementally larger indices along that dimension of the variable).
Although random-access window (i.e. subset) writes appear to require the lower level package, more systematic subset writes (eventually covering the entire array) can be done incrementally with xarray (by specifying a chunk size parameter to trigger use of the dask.array backend), and provided that your algorithm is refactored so that the main loop occurs in the dask/xarray store-to-file call. This means you will not have explicit control over the sequence in which chunks are generated and written.
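A minimal sketch of that netCDF4 pattern (the file name, variable and shapes are illustrative):

```python
import numpy as np
from netCDF4 import Dataset

ds = Dataset('out.nc', 'w')
ds.createDimension('time', None)            # unlimited, so it can keep growing
ds.createDimension('x', 100)
var = ds.createVariable('data', 'f4', ('time', 'x'))

for t in range(10):                          # one window/slice at a time
    var[t, :] = np.random.rand(100)          # only this slice is held in memory
    ds.sync()                                # flush the write to disk

ds.close()
```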
| 0 | false | 1 | 6,037 |
2019-04-08 14:03:38.120
|
Is there any way to hide or encrypt your python code for edge devices? Any way to prevent reverse engineering of python code?
|
I am trying to make a smart IOT device (capable of performing smart Computer Vision operations, on the edge device itself). A Deep Learning algorithm (written in python) is implemented on Raspberry Pi. Now, while shipping this product (software + hardware) to my customer, I want that no one should log in to the raspberry pi and get access to my code. The flow should be something like, whenever someone logs into pi, there should be some kind of key that needs to be input to get access to code. But in that case how OS will get access to code and run it (without key). Then I may have to store the key on local. But still there is a chance to get access to key and get access to the code. I have applied a patent for my work and want to protect it.
I am thinking to encrypt my code (written in python) and just ship the executable version. I tried pyinstaller for it, but somehow there is a script available on the internet that can reverse engineer it.
Now I am little afraid as it can leak all my effort of 6 months at one go. Please suggest a better way of doing this.
Thanks in Advance.
|
Keeping the code on your server and using internet access is the only way to keep the code private (maybe). Any type of distributed program can be taken apart eventually. You can't (possibly shouldn't) try to keep people from getting inside devices they own and are in their physical possession. If you have your property under patent it shouldn't really matter if people are able to see the code as only you will be legally able to profit from it.
As a general piece of advice, code is really difficult to control access to. Trying to encrypt software or apply software keys to it or something like that is at best a futile attempt and at worst can often cause issues with software performance and usability. The best solution is often to link a piece of software with some kind of custom hardware device which is necessary and only you sell. That might not be possible here since you're using generic hardware but food for thought.
| -0.386912 | false | 1 | 6,038 |
2019-04-08 15:02:18.437
|
How to classify unlabelled data?
|
I am new to Machine Learning. I am trying to build a classifier that classifies text as having a URL or not having a URL. The data is not labelled. I just have textual data. I don't know how to proceed with it. Any help or examples are appreciated.
|
Since it's text, you can use bag of words technique to create vectors.
You can use cosine similarity to cluster the common type text.
Then use classifier, which would depend on number of clusters.
This way you have a labeled training set.
If you have two cluster, binary classifier like logistic regression would work.
If you have multiple classes, you need to train model based on multinomial logistic regression
or train multiple logistic models using One vs Rest technique.
Lastly, you can test your model using k-fold cross validation.
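A bare-bones sketch of that pipeline with scikit-learn (the sample texts are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

texts = [
    "check out http://example.com",
    "see https://blog.example.org/post",
    "meeting moved to 3pm",
    "lunch tomorrow?",
]

vectors = TfidfVectorizer().fit_transform(texts)             # bag-of-words vectors
pseudo_labels = KMeans(n_clusters=2).fit_predict(vectors)    # unsupervised labels

clf = LogisticRegression().fit(vectors, pseudo_labels)       # now a labeled problem
print(clf.predict(vectors[:2]))
```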
| 0.201295 | false | 1 | 6,039 |
2019-04-08 16:54:25.427
|
Django - how to visualize signals and save overrides?
|
As a project grows, so do dependencies and event chains, especially in overridden save() methods and post_save and pre_save signals.
Example:
An overridden A.save creates two related objects to A - B and C. When C is saved, the post_save signal is invoked that does something else, etc...
How can these event chins be made more clear? Is there a way to visualize (generate automatically) such chains/flows? I'm not looking for ERD nor a Class diagram. I need to be sure that doing one thing one place won't affect something on the other side of the project, so simple visualization would be the best.
EDIT
To be clear, I know that it would be almost impossible to check dynamically generated signals. I just want to check all (not dynamically generated) post_save, pre_save, and overridden save methods and visualize them so I can see immediately what is happening and where when I save something.
|
(Too long to fit into a comment, lacking code to be a complete answer)
I can't mock up a ton of code right now, but another interesting solution, inspired by Mario Orlandi's comment above, would be some sort of script that scans the whole project and searches for any overridden save methods and pre and post save signals, tracking the class/object that creates them. It could be as simple as a series of regex expressions that look for class definitions followed by any overridden save methods inside.
Once you have scanned everything, you could use this collection of references to create a dependency tree (or set of trees) based on the class name and then topologically sort each one. Any connected components would illustrate the dependencies, and you could visualize or search these trees to see the dependencies in a very easy, natural way. I am relatively naive in django, but it seems you could statically track dependencies this way, unless it is common for these methods to be overridden in multiple places at different times.
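A very rough sketch of such a scanner (the project path and patterns are deliberately simplistic placeholders):

```python
import os
import re

PATTERNS = {
    'overridden save()': re.compile(r'def\s+save\s*\('),
    'pre_save/post_save': re.compile(r'\b(pre_save|post_save)\b'),
}

for root, _, files in os.walk('my_project'):
    for name in files:
        if not name.endswith('.py'):
            continue
        path = os.path.join(root, name)
        with open(path) as fh:
            for lineno, line in enumerate(fh, 1):
                for label, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print('{}:{}: {}: {}'.format(path, lineno, label, line.strip()))
```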
| 0.454054 | false | 1 | 6,040 |
2019-04-09 07:36:02.467
|
Capturing time between HTML form submit action and printing response
|
I have a Python Flask application with an HTML form which accepts a few inputs from the user, uses them in a Python program, and returns the processed values back to the Flask application's return statement.
I want to capture the time taken for the whole processing and rendering of output data in the browser, but I'm not sure how to do that. At present I have captured the time taken by the Python program to process the input values, but it doesn't account for the complete time between the "submit" action and rendering the output data.
|
Use an AJAX request to submit the form. Record the time when the button is clicked and again after the response is received, then calculate the difference.
| 0 | false | 1 | 6,041 |
2019-04-09 09:15:48.837
|
How to extract images from PDF or Word, together with the text around images?
|
I found there are some libraries for extracting images from PDF or Word, like docx2txt and pdfimages. But how can I get the content around the images (for example, there may be a title below the image)? Or get the page number of each image?
Some other tools, like PyPDF2 and minecart, can extract images page by page. However, I cannot run that code successfully.
Is there a good way to get this information about the images (from the images obtained with docx2txt or pdfimages, or another way to extract images along with their info)?
|
docx2python pulls the images into a folder and leaves -----image1.png---- markers in the extracted text. This might get you close to where you'd like to go.
| 0 | false | 1 | 6,042 |
2019-04-09 18:46:58.267
|
What is this audio datatype and how do I convert it to wav/l16?
|
I am recording audio in a web browser and sending it to a Flask backend. From there, I want to transcribe the audio using Watson Speech to Text. I cannot figure out what data format I'm receiving the audio in, or how to convert it to a format that works for Watson.
I believe watson expects a bytestring like b'\x0c\xff\x0c\xffd. The data I receive from the browser looks like [ -4 -27 -34 -9 1 -8 -1 2 10 -28], which I can't directly convert to bytes because of the negative values (using bytes() gives me that error).
I'm really at a loss for what kind of conversion I need to be making here. Watson doesn't return any errors for any kind of data I throw at it just doesn't respond.
|
Those values should be fine, but you have to define how you want them stored before getting the bytes representation of them.
You'd simply want to convert those values to signed 2-byte/16-bit integers, then get the bytes representation of those.
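For instance, a small sketch with NumPy (assuming the browser samples are already integers in the 16-bit range; floats in [-1, 1] would need scaling by 32767 first):

```python
import numpy as np

samples = [-4, -27, -34, -9, 1, -8, -1, 2, 10, -28]
pcm = np.array(samples, dtype='<i2')     # little-endian signed 16-bit integers
audio_bytes = pcm.tobytes()              # e.g. b'\xfc\xff\xe5\xff...'
```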
| 1.2 | true | 1 | 6,043 |
2019-04-09 19:37:11.227
|
how do I implement ssim for loss function in keras?
|
I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the second. The loss function is a combination of them. However, I need a higher SSIM and a lower cross-entropy, so I think simply combining them isn't right. Another problem is that I could not find an implementation of SSIM in Keras.
TensorFlow has tf.image.ssim, but it accepts images, and I don't think I can use it in a loss function, right? Could you please tell me what I should do? I am a beginner in Keras and deep learning, and I do not know how I can make SSIM a custom loss function in Keras.
|
Another choice would be
ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=1.0))  # max_val is the dynamic range of your images
then
combined_loss = mae (or mse) + ssim_loss
In this way, you are minimizing both of them.
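A small sketch of wiring this into a two-output tf.keras model (the architecture and output names are invented; only the loss wiring is the point):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def ssim_loss(y_true, y_pred):
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

inp = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(8, 3, padding='same', activation='relu')(inp)
img_out = layers.Conv2D(3, 3, padding='same', activation='sigmoid',
                        name='image_output')(x)
cls_out = layers.Dense(10, activation='softmax',
                       name='class_output')(layers.GlobalAveragePooling2D()(x))

model = Model(inp, [img_out, cls_out])
model.compile(optimizer='adam',
              loss={'image_output': ssim_loss,
                    'class_output': 'categorical_crossentropy'},
              loss_weights={'image_output': 1.0, 'class_output': 1.0})
```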
| 0 | false | 1 | 6,044 |
2019-04-11 11:57:15.603
|
KMeans: Extracting the parameters/rules that fill up the clusters
|
I have created a 4-cluster k-means customer segmentation in scikit learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster.
My question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clusters may slightly shift, but I want to keep the old clusters (even though they fit the data slightly worse).
My guess is that there should be a way to extract the paramaters that decides which case goes to their respective cluster, but I haven't found the solution yet.
I would appreciate any help
|
Got the answer in a different topic:
Just record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean.
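A small sketch of that with scikit-learn (the feature matrices are random placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans

X_train = np.random.rand(200, 5)     # this month's customer features
X_new = np.random.rand(50, 5)        # next month's customers

kmeans = KMeans(n_clusters=4, random_state=0).fit(X_train)
np.save('cluster_centers.npy', kmeans.cluster_centers_)     # keep the old means

centers = np.load('cluster_centers.npy')
distances = np.linalg.norm(X_new[:, None, :] - centers[None, :, :], axis=2)
new_labels = distances.argmin(axis=1)                        # closest old cluster
```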
| 0.386912 | false | 1 | 6,045 |
2019-04-11 13:25:10.313
|
how to count number of days via cron job in odoo 10?
|
I am setting up a script in Odoo that counts the number of days passing each day.
How can I count the days as each one passes, until the end of the month?
For example: I have set two dates and found the number of days between them. I need a function that re-checks the number of remaining days with each passing day. When the number of remaining days reaches 0, a cron job should be called.
|
Write a scheduled action that runs python code daily. The first thing that this code should do is to check the number of days you talk about and if it is 0, it should trigger whatever action it is needed.
| 0 | false | 1 | 6,046 |
2019-04-12 04:46:43.223
|
How to add reply(child comments) to comments on feed in getstream.io python
|
I am using getstream.io to create feeds. A user can follow feeds and add reactions such as likes and comments. If a user adds a comment on a feed and another user wants to reply to that comment, how can I achieve this and also retrieve all replies to the comment?
|
You can add a child reaction by using the parent reaction's reaction_id.
| 0 | false | 1 | 6,047 |
2019-04-12 12:06:29.347
|
how to find the similarity between two documents
|
I have tried using spaCy's similarity function to get the best matching sentence in a document. However, it fails for bullet points because it considers each bullet a sentence, and the bullets are incomplete sentences (e.g. sentence 1: "password should be min 8 characters long", sentence 2, in the form of a bullet: "8 characters"). It does not know the bullet is referring to the password, so my similarity score comes out very low.
|
The bullets are considered, but the problem is that it doesn't understand what "8 characters" is referring to, so I thought of finding the heading of the paragraph and replacing the bullets with it.
I found the headings using python docs, but it doesn't read the bullets while reading the document. Is there a way I can read them using python docs?
Is there any way I can find the heading of a paragraph in spaCy?
Is there a better approach for this?
| 0 | false | 2 | 6,048 |
2019-04-12 12:06:29.347
|
how to find the similarity between two documents
|
I have tried using spaCy's similarity function to get the best matching sentence in a document. However, it fails for bullet points because it considers each bullet a sentence, and the bullets are incomplete sentences (e.g. sentence 1: "password should be min 8 characters long", sentence 2, in the form of a bullet: "8 characters"). It does not know the bullet is referring to the password, so my similarity score comes out very low.
|
Sounds to me like you need to do more text processing before attempting to use similarity. If you want bullet points to be considered part of a sentence, you need to modify your spacy pipeline to understand to do so.
| 0 | false | 2 | 6,048 |
2019-04-12 13:48:58.717
|
Trying to Import Infoblox Module in Python
|
I am trying to write some code in python to retrieve some data from Infoblox. To do this i need to Import the Infoblox Module.
Can anyone tell me how to do this ?
|
Before you can import infoblox you need to install it:
open a command prompt (press windows button, then type cmd)
if you are working in a virtual environment access it with activate yourenvname (otherwise skip this step)
execute pip install infoblox to install infoblox, then you should be fine
to test it from the command prompt, execute python, and then try executing import infoblox
The same process works for basically every package.
| 0 | false | 1 | 6,049 |
2019-04-12 21:52:02.810
|
Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?
|
So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called "learning_log" and change the working directory to "learning_log" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book?
I already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command:
learning_log$ virtualenv ll_env
And I get:
bash: virtualenv: command not found
Since I'm using Python3.6, I tried:
learning_log$ virtualenv ll_env --python=python3
And I still get:
bash: virtualenv: command not found
Brandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env
Error: Command '['/Users/brandondusch/learning_log/ll_env/bin/python', '-Im', 'ensurepip', '--upgrade', '-
-default-pip']' returned non-zero exit status 1.
|
For Ubuntu:
Simply put: if virtualenv --version returns something like virtualenv: command not found and which virtualenv prints nothing on the console, then virtualenv is not installed on your system. Try installing it with pip3 install virtualenv or sudo apt-get install virtualenv, but the latter might install a slightly older version.
EDIT
For Mac:
For Mac, you need to install it using sudo pip install virtualenv after you have installed Python 3 on your Mac.
| 0 | false | 2 | 6,050 |
2019-04-12 21:52:02.810
|
Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?
|
So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called "learning_log" and change the working directory to "learning_log" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book?
I already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command:
learning_log$ virtualenv ll_env
And I get:
bash: virtualenv: command not found
Since I'm using Python3.6, I tried:
learning_log$ virtualenv ll_env --python=python3
And I still get:
bash: virtualenv: command not found
Brandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env
Error: Command '['/Users/brandondusch/learning_log/ll_env/bin/python', '-Im', 'ensurepip', '--upgrade', '-
-default-pip']' returned non-zero exit status 1.
|
I had the same error. I restarted my computer and tried it again, but the error was still there. Then I tried python3 -m venv ll_env and it moved forward.
| 0 | false | 2 | 6,050 |
2019-04-13 11:57:31.263
|
How do I calculate the similarity of a word or couple of words compared to a document using a doc2vec model?
|
In gensim I have a trained doc2vec model, if I have a document and either a single word or two-three words, what would be the best way to calculate the similarity of the words to the document?
Do I just do the standard cosine similarity between them as if they were 2 documents? Or is there a better approach for comparing small strings to documents?
On first thought I could get the cosine similarity between each word in the 1-3 word string and every word in the document, taking the averages, but I don't know how effective this would be.
|
There's a number of possible approaches, and what's best will likely depend on the kind/quality of your training data and ultimate goals.
With any Doc2Vec model, you can infer a vector for a new text that contains known words – even a single-word text – via the infer_vector() method. However, like Doc2Vec in general, this tends to work better with documents of at least dozens, and preferably hundreds, of words. (Tiny 1-3 word documents seem especially likely to get somewhat peculiar/extreme inferred-vectors, especially if the model/training-data was underpowered to begin with.)
Beware that unknown words are ignored by infer_vector(), so if you feed it a 3-word document for which two words are unknown, it's really just inferring based on the one known word. And if you feed it only unknown words, it will return a random, mild initialization vector that's undergone no inference tuning. (All inference/training always starts with such a random vector, and if there are no known words, you just get that back.)
Still, this may be worth trying, and you can directly compare via cosine-similarity the inferred vectors from tiny and giant documents alike.
Many Doc2Vec modes train both doc-vectors and compatible word-vectors. The default PV-DM mode (dm=1) does this, or PV-DBOW (dm=0) if you add the optional interleaved word-vector training (dbow_words=1). (If you use dm=0, dbow_words=0, you'll get fast training, and often quite-good doc-vectors, but the word-vectors won't have been trained at all - so you wouldn't want to look up such a model's word-vectors directly for any purposes.)
With such a Doc2Vec model that includes valid word-vectors, you could also analyze your short 1-3 word docs via their individual words' vectors. You might check each word individually against a full document's vector, or use the average of the short document's words against a full document's vector.
Again, which is best will likely depend on other particulars of your need. For example, if the short doc is a query, and you're listing multiple results, it may be the case that query result variety – via showing some hits that are really close to single words in the query, even when not close to the full query – is as valuable to users as documents close to the full query.
Another measure worth looking at is "Word Mover's Distance", which works just with the word-vectors for a text's words, as if they were "piles of meaning" for longer texts. It's a bit like the word-against-every-word approach you entertained – but working to match words with their nearest analogues in a comparison text. It can be quite expensive to calculate (especially on longer texts) – but can sometimes give impressive results in correlating alternate texts that use varied words to similar effect.
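As a rough illustration of the inference route, here is a minimal sketch (assuming a trained gensim Doc2Vec model in a mode that also trains word-vectors; the model file name, the query words and long_doc_tokens are placeholders):

from gensim.models import Doc2Vec
import numpy as np

model = Doc2Vec.load("my_doc2vec.model")             # placeholder model file
query_vec = model.infer_vector(["hot", "water"])     # tiny 1-3 word "document"
doc_vec = model.infer_vector(long_doc_tokens)        # tokens of a full document
cosine = np.dot(query_vec, doc_vec) / (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec))

The same query_vec can also be compared against the model's trained word-vectors (model.wv) if you prefer the word-by-word route described above.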
| 1.2 | true | 1 | 6,051 |
2019-04-14 15:12:05.933
|
operations order in Python
|
I have just now started learning Python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order of preference is given to * and % (from left to right).
I made up a problem on my own: get the value of 100 - 25 % 3 + 4. The answer in the output is 103.
I just wrote: print ("the value of", 100 - 25 % 3 + 4), which gave the output value 103.
If % is given preference, I expected 25 % 3 to give 3/4. So how is the answer coming out as 103? Do I need to mention any float command or something?
I would like to know how I can use these operations. Is there any pre-defined rule to solve these kinds of problems?
|
The % operator is used to find the remainder of a division. So 25 % 3 = 1, not 3/4.
| 0 | false | 2 | 6,052 |
2019-04-14 15:12:05.933
|
operations order in Python
|
I have just now started learning Python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order of preference is given to * and % (from left to right).
I made up a problem on my own: get the value of 100 - 25 % 3 + 4. The answer in the output is 103.
I just wrote: print ("the value of", 100 - 25 % 3 + 4), which gave the output value 103.
If % is given preference, I expected 25 % 3 to give 3/4. So how is the answer coming out as 103? Do I need to mention any float command or something?
I would like to know how I can use these operations. Is there any pre-defined rule to solve these kinds of problems?
|
Actually, the % operator gives you the REMAINDER of the operation.
Therefore, 25 % 3 returns 1, because 25 / 3 = 8 and the remainder of this operation is 1.
This way, your operation 100 - 25 % 3 + 4 is the same as 100 - 1 + 4 = 103
| 1.2 | true | 2 | 6,052 |
2019-04-14 20:15:45.423
|
Comparing feature extractors (or comparing aligned images)
|
I'd like to compare ORB, SIFT, BRISK, AKAZE, etc. to find which works best for my specific image set. I'm interested in the final alignment of images.
Is there a standard way to do it?
I'm considering this solution: take each algorithm, extract the features, compute the homography and transform the image.
Now I need to check which transformed image is closer to the target template.
Maybe I can repeat the process with the target template and the transformed image and look for the homography matrix closest to the identity but I'm not sure how to compute this closeness exactly. And I'm not sure which algorithm should I use for this check, I suppose a fixed one.
Or I could do some pixel-level comparison between the images using a perceptual difference hash (dHash). But I suspect the resulting Hamming distance may not be very good for images that will be nearly identical.
I could blur them and do a simple subtraction but sounds quite weak.
Thanks for any suggestions.
EDIT: I have thousands of images to test. These are real world pictures. Images are of documents of different kinds, some with a lot of graphics, others mostly geometrical. I have about 30 different templates. I suspect different templates works best with different algorithms (I know in advance the template so I could pick the best one).
Right now I use cv2.matchTemplate to find some reference patches in the transformed images and I compare their locations to the reference ones. It works but I'd like to improve over this.
|
From your question, it seems like the task is not to compare the feature extractors themselves, but rather to find which type of feature extractor leads to the best alignment.
For this, you need two things:
a way to perform the alignment using the features from different extractors
a way to check the accuracy of the alignment
The algorithm you suggested is a good approach for doing the alignment. To check if accuracy, you need to know what is a good alignment.
You may start with an alignment you already know. And the easiest way to know the alignment between two images is if you made the inverse operation yourself. For example, starting with one image, you rotate it some amount, you translate/crop/scale or combine all this operations. Knowing how you obtained the image, you can obtain your ideal alignment (the one that undoes your operations).
Then, having the ideal alignment and the alignment generated by your algorithm, you can use one metric to evaluate its accuracy, depending on your definition of "good alignment".
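A sketch of that idea with OpenCV and NumPy (img is an already-loaded template image, H_est is the homography estimated by whichever extractor is being tested, and H_true is a synthetic warp you applied yourself, so all three are assumptions standing in for your real data):

import numpy as np
import cv2

h, w = img.shape[:2]
H_true = np.array([[1.0, 0.02, 5.0],
                   [-0.01, 1.0, -3.0],
                   [0.0, 0.0, 1.0]])                  # known synthetic warp
warped = cv2.warpPerspective(img, H_true, (w, h))
# ... run the detector/matcher pipeline on (img, warped) to obtain H_est ...
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
err = np.linalg.norm(cv2.perspectiveTransform(corners, H_true)
                     - cv2.perspectiveTransform(corners, H_est), axis=2).mean()

The mean corner reprojection error err gives a single number per extractor that you can average over your image set.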
| 1.2 | true | 1 | 6,053 |
2019-04-15 15:05:35.363
|
How to get access to django database from other python program?
|
I have a Django project in which I can display records from a Raspberry Pi device. I had a MySQL database and I have sent records from the Raspberry Pi there. I can display them via my API, but I want to work on these records. I want to change this to the Django database, but I don't know how I can get access to the Django database, which is on a VPS server, from the Raspberry Pi device.
|
ALERT: THIS CAN LEAD TO SECURITY ISSUES
A Django database is no different from any other database. In this case a MySQL.
The VPS server where the MySQL is must have a public IP, the MySQL must be listening on that IP (if the VPS has a public IP but MySQL is not listening/bound on that IP, it won't work) and the port of the MySQL open (default is 3306); then you can connect to that database from any program with the required configuration params (host, port, user, password, ...).
I'm not a sysadmin expert, but having a MySQL on a public IP is a security hole. So the best approach IMO is to expose the operations you want to do via API with Django.
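If you do open the database up anyway, a minimal connection sketch with mysql-connector-python would look like this (host, credentials, database and table names are all placeholders):

import mysql.connector

conn = mysql.connector.connect(host="VPS_PUBLIC_IP", port=3306,
                               user="dbuser", password="secret",
                               database="django_db")
cur = conn.cursor()
cur.execute("SELECT * FROM myapp_record")   # a table Django created for one of your models
rows = cur.fetchall()
conn.close()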
| 1.2 | true | 1 | 6,054 |
2019-04-16 11:37:49.557
|
What happened after entered " flask run" on a terminal under the project directory?
|
What happens after entering "flask run" in a terminal under the project directory?
How does the Python interpreter get to the file flask/__main__.py and start running the project's code?
I know how Flask locates the app. What I want to figure out is how the command line instruction "flask run" gets flask/__main__.py booted up.
|
flask is a Python script. Since you stated you are not a beginner, you should simply open the file (/usr/bin/flask) in your favorite text editor and start from there. There is no magic under the hood.
| 1.2 | true | 1 | 6,055 |
2019-04-17 08:02:38.757
|
what's the difference between airflow's 'parallelism' and 'dag_concurrency'
|
I can't understand the difference between dag_concurrency and parallelism. The documentation and some of the related posts here somehow contradict my findings.
The understanding I had before was that the parallelism parameter allows you to set the MAX number of global(across all DAGs) TaskRuns possible in airflow and dag_concurrency to mean the MAX number of TaskRuns possible for a single Dag.
So I set the parallelism to 8 and dag_concurrency to 4 and ran a single Dag. And I found out that it was running 8 TIs at a time but I was expecting it to run 4 at a time.
How is that possible?
Also, if it helps, I have set the pool size to 10 or so for these tasks. But that shouldn't have mattered as "config" parameters are given higher priorities than the pool's, Right?
|
The other answer is only partially correct:
dag_concurrency does not explicitly control tasks per worker. dag_concurrency is the number of tasks running simultaneously per dag_run. So if your DAG has a place where 10 tasks could be running simultaneously but you want to limit the traffic to the workers you would set dag_concurrency lower.
The queues and pools setting also have an effect on the number of tasks per worker.
These setting are very important as you start to build large libraries of simultaneously running DAGs.
parallelism is the maximum number of tasks across all the workers and DAGs.
| 0.986614 | false | 1 | 6,056 |
2019-04-17 15:40:35.510
|
how do i fix "No module named 'win32api'" on python2.7
|
I am trying to import win32api in Python 2.7.9. I did "pip install pypiwin32" and made sure all the files were installed correctly (I have win32api.pyd under ${PYTHON_HOME}\Lib\site-packages\win32). I also tried copying the files from C:\Python27\Lib\site-packages\pywin32_system32 to C:\Python27\Lib\site-packages\win32. I also tried restarting my PC after each of these steps, but nothing seems to work! I still get the error 'No module named 'win32api''
|
Well, it turns out the answer is upgrading my Python to 3.6.
Python 2.7 seems too old to work with outside imports (I'm just guessing here, because it's not the first time I'm having an import problem).
Hope it helps :)
| 1.2 | true | 1 | 6,057 |
2019-04-17 15:55:52.440
|
paste code to Jupyter notebook without symbols
|
I tried to paste a few lines of code from online sources that include symbols like ">>>". My question is how to paste without these symbols?
(Pasting line by line works, but it would be very annoying for a big project.)
Cheers
|
Go to Edit > Find and Replace, search for >>> and replace it with an empty string. Enjoy :)
| 0 | false | 1 | 6,058 |
2019-04-17 19:02:36.187
|
How can python iterate over a set if no order is defined?
|
So I notice that we say in python that sets have no order or arrangement, although of course you can sort the list generated from a set.
So I was wondering how the iteration over a set is defined in python. Does it just follow the sorted list ordering, or is there some other footgun that might crop up at some point?
Thanks.
|
A temporary order is used to iterate over the set, but you can't reliably predict it (practically speaking, as it depends on the insertion and deletion history of the set). If you need a specific order, use a list.
| 1.2 | true | 1 | 6,059 |
2019-04-18 06:31:12.170
|
How to triangulate a point in 3D space, given coordinate points in 2 image and extrinsic values of the camera
|
I'm trying to write a function that when given two cameras, their rotation, translation matrices, focal point, and the coordinates of a point for each camera, will be able to triangulate the point into 3D space. Basically, given all the extrinsic/intrinsic values needed
I'm familiar with the general idea: to somehow create two rays and find the closest point that satisfies the least squares problem, however, I don't know exactly how to translate the given information to a series of equations to the coordinate point in 3D.
|
Assume you have two cameras -- camera 1 and camera 2.
For each camera j = 1, 2 you are given:
The distance hj between it's center Oj, (is "focal point" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj, the Oj--->x and Oj--->y axes are parallel to the screen, while the Oj--->z axis is perpendicular to the screen.
The 3 x 3 rotation matrix Uj and the 3 x 1 translation vector Tj which transforms the Cartesian 3D coordinates with respect to the system of camera j (see point 1) to the world-coordinates, i.e. the coordinates with respect to a third coordinate system from which all points in the 3D world are described.
On the screen of camera j, which is the plane parallel to the plane Oj-x-y and at a distance hj from the origin Oj, you have the 2D coordinates (let's say the x,y coordinates only) of point pj, where the two points p1 and p2 are in fact the projected images of the same point P, somewhere in 3D, onto the screens of camera 1 and 2 respectively. The projection is obtained by drawing the 3D line between point Oj and point P and defining point pj as the unique intersection point of this line with with the screen of camera j. The equation of the screen in camera j's 3D coordinate system is z = hj , so the coordinates of point pj with respect to the 3D coordinate system of camera j look like pj = (xj, yj, hj) and so the 2D screen coordinates are simply pj = (xj, yj) .
Input: You are given the 2D points p1 = (x1, y1), p2 = (x2, y2) , the twp cameras' focal distances h1, h2 , two 3 x 3 rotation matrices U1 and U2, two translation 3 x 1 vector columns T1 and T2 .
Output: The coordinates P = (x0, y0, z0) of point P in the world coordinate system.
One somewhat simple way to do this, avoiding homogeneous coordinates and projection matrices (which is fine too and more or less equivalent), is the following algorithm:
Form Q1 = [x1; y1; h1] and Q2 = [x2; y2; h2] , where they are interpreted as 3 x 1 vector columns;
Transform P1 = U1*Q1 + T1 and P2 = U2*Q2 + T2 , where * is matrix multiplication; here it is a 3 x 3 matrix multiplied by a 3 x 1 column, giving a 3 x 1 column;
Form the lines X = T1 + t1*(P1 - T1) and X = T2 + t2*(P2 - T2) ;
The two lines from the preceding step 3 either intersect at a common point, which is the point P or they are skew lines, i.e. they do not intersect but are not parallel (not coplanar).
If the lines are skew lines, find the unique point X1 on the first line and the unique point X2 on the second line such that the vector X2 - X1 is perpendicular to both lines, i.e. X2 - X1 is perpendicular to both vectors P1 - T1 and P2 - T2. These two points X1 and X2 are the closest points on the two lines. Then point P = (X1 + X2)/2 can be taken as the midpoint of the segment X1 X2.
In general, the two lines should pass very close to each other, so the two points X1 and X2 should be very close to each other.
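A small NumPy sketch of steps 2-5 (under the notation above; U1, U2, T1, T2, Q1, Q2 are assumed to already be NumPy arrays of the stated shapes):

import numpy as np

P1 = U1 @ Q1 + T1
P2 = U2 @ Q2 + T2
d1 = (P1 - T1).ravel()                    # direction of line 1
d2 = (P2 - T2).ravel()                    # direction of line 2
r = (T2 - T1).ravel()
A = np.array([[d1 @ d1, -(d1 @ d2)],
              [d1 @ d2, -(d2 @ d2)]])
b = np.array([d1 @ r, d2 @ r])
t1, t2 = np.linalg.solve(A, b)            # parameters of the closest points
X1 = T1.ravel() + t1 * d1
X2 = T2.ravel() + t2 * d2
P = (X1 + X2) / 2                         # triangulated 3D point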
| 0 | false | 1 | 6,060 |
2019-04-18 14:54:16.823
|
Understanding execution_date in Airflow
|
I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running:
{{ execution_date.replace(day=1).strftime("%Y-%m-%d") }}
This always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.
|
execution_date returns a datetime object. You are using the replace method of that object to replace the "day" with the first. Then you output that as a string with the strftime method.
| 0 | false | 2 | 6,061 |
2019-04-18 14:54:16.823
|
Understanding execution_date in Airflow
|
I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running:
{{ execution_date.replace(day=1).strftime("%Y-%m-%d") }}
This always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.
|
The reason this always returns the first of the month is that you are using a Replace to ensure the day is forced to be the 1st of the month. Simply remove ".replace(day=1)".
| 1.2 | true | 2 | 6,061 |
2019-04-19 10:22:36.757
|
How to convert file .py to .exe, having Python from Anaconda Navigator? (in which command prompt should I write installation codes?)
|
I created a Python script (a .py file) that works.
I would like to convert this file to .exe, to use it on a computer without Python installed.
How can I do this?
I have Python from Anaconda3.
What can I do?
Thank you!
I followed some instructions found here on Stack Overflow.
I modified the Path in the 'Environment variables' in the Windows settings, pointing it to the Anaconda folder.
I managed to install pip in the conda prompt (I guess).
Still, nothing is working. I don't know how to proceed and, in general, how to do things properly.
|
I personally use PyInstaller; it's available from pip.
But it will not really compile, it will just bundle.
The difference is that compiling means translating to real machine code, while bundling means creating a big exe file with all your libs and your Python interpreter.
Even if PyInstaller creates a bigger file and is slower (at execution) than Cython, I prefer it because it works all the time without extra work (except launching it).
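For example, assuming PyInstaller is installed into the same environment as Anaconda's Python, the usual two commands (run from the Anaconda Prompt) are:

pip install pyinstaller
pyinstaller --onefile your_script.py

The bundled your_script.exe then appears in the dist folder next to your script.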
| 1.2 | true | 1 | 6,062 |
2019-04-19 16:25:54.117
|
How can i install opencv in python3.7 on ubuntu?
|
I have an Nvidia Jetson TX2 with the Orbitty shield on it.
I got it from a friend who worked on it last year. It came with Ubuntu 16.04. I updated everything on it and I installed the latest Python 3.7 and pip.
I tried checking the version of OpenCV to see what I have, but when I do import cv2 it gives me:
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'cv2'
Somehow besides Python 3.7 I have Python 2.7 and Python 3.5 installed. If I try to import cv2 on Python 2.7 and 3.5 it works, but in 3.7 it doesn't.
Can you tell me how I can install the latest version of OpenCV in Python 3.7?
|
Does python-3.7 -m pip install opencv-python work? You may have to change the python-3.7 to whatever path/alias you use to open your own python 3.7.
| 0 | false | 1 | 6,063 |
2019-04-20 21:40:23.537
|
Negative Feature Importance Value in CatBoost LossFunctionChange
|
I am using CatBoost for ranking task. I am using QueryRMSE as my loss function. I notice for some features, the feature importance values are negative and I don't know how to interpret them.
It says in the documentation, the i-th feature importance is calculated as the difference between loss(model with i-th feature excluded) - loss(model).
So a negative feature importance value means that feature makes my loss go up?
What does that suggest then?
|
Negative feature importance value means that feature makes the loss go up. This means that your model is not getting good use of this feature. This might mean that your model is underfit (not enough iteration and it has not used the feature enough) or that the feature is not good and you can try removing it to improve final quality.
| 1.2 | true | 1 | 6,064 |
2019-04-23 04:00:09.300
|
cv2 - multi-user image display
|
Using python and OpenCV, is it possible to display the same image on multiple users?
I am using cv2.imshow but it only displays the image for the user that runs the code.
Thanks
|
I was able to display the images on another user/host by setting the DISPLAY environment variable of the X server to match the desired user's DISPLAY.
| 0 | false | 1 | 6,065 |
2019-04-23 06:19:55.700
|
Move turtle slightly closer to random coordinate on each update
|
I'm doing a homework assignment and I want to know how I can move the turtle toward a random location a small step at a time. Can I use turtle.goto() in a sort of slow motion?
Someone said I should use turtle.setheading() and turtle.forward(), but I'm confused about how to use setheading() when the destination is random.
I'm hoping the turtle can move half a radius (which is 3.5) toward that random spot each time I update the program.
|
Do you mean that you want to move a small step, stop, and repeat? If so, you can ‘import time’ and add ‘time.sleep(0.1)’ after each ‘forward’
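A minimal sketch of the setheading() idea mentioned in the question (the turtle object and the random target are assumptions; towards() returns the angle from the turtle to that point):

import random
import turtle

t = turtle.Turtle()
target_x = random.randint(-200, 200)
target_y = random.randint(-200, 200)
t.setheading(t.towards(target_x, target_y))   # face the random spot
t.forward(3.5)                                # move half a radius per update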
| 0 | false | 1 | 6,066 |
2019-04-23 15:17:02.483
|
Python append single bit to bytearray
|
I have a bytearray containing some bytes; it currently looks like this (converted to ASCII):
['0b1100001', '0b1100010', '0b1100011', '0b10000000']
I need to add a number of 0 bits to this, is that possible or would I have to add full bytes? If so, how do I do that?
|
Where do you need the bits added? To each element of your list, or as an additional element that contains all 0's?
The former:
myList[0] = myList[0] * 2 # ASL (shift left by one bit, appending a 0)
The latter:
myList.append(0b000000)
| 0.135221 | false | 1 | 6,067 |
2019-04-23 19:50:59.320
|
How to access the __pycache__ directory
|
I want to remove a folder, but I can’t get into __pycache__ to delete the .pyc and .pyo files. I have done it before, but I don’t know how I did it.
|
If you want to remove your python file artifacts, such as the .pyc and .pyo cache files, maybe you could try the following:
Move into your project's root directory
cd <path_to_project_root>
Remove python file artifacts
find . -name '*.pyc' -exec rm -f {} +
find . -name '*.pyo' -exec rm -f {} +
Hopefully that helps!
| 0 | false | 1 | 6,068 |
2019-04-26 07:18:59.020
|
numerical entity extraction from unstructured texts using python
|
I want to extract numerical entities like temperature and duration mentioned in unstructured formats of texts using neural models like CRF using python. I would like to know how to proceed for numerical extraction as most of the examples available on the internet are for specific words or strings extraction.
Input: 'For 5 minutes there, I felt like baking in an oven at 350 degrees F'
Output: temperature: 350
duration: 5 minutes
|
So far my research shows that you can treat numbers as words.
This raises an issue: learning 5 will be OK, but 19684 will be too rare to be learned.
One proposal is to convert numbers into words ("nineteen thousand six hundred eighty four") and embed each word. The downside is that you are now learning a (minimum) 6-dimensional vector (one dimension per word).
Based on your usage, you can also embed 0 to 3000 with distinct ids, and say 3001 to 10000 will map to id 3001 in your dictionary, and then add one id in your dictionary for each 10x.
| 0 | false | 1 | 6,069 |
2019-04-26 14:50:51.197
|
Python Azure webjob passing parameters
|
I have a Python WebJob living in Azure and I'm trying to pass parameters to it.
I've found documentation saying that I should be able to post the URL and add:?arguments={'arg1', 'arg2'} after it.
However, when I do that and then try to print(sys.argv) in my code, it's only printing the name of the Python file and none of the arguments I pass to it.
How do I get the arguments to pass to my Python code? I am also using a run.cmd in my Azure directory to trigger my Python code if that makes a difference.
UPDATE: So I tested it in another script without the run.cmd and that certainly is the problem. If I just do ?arguments=22 66 it works. So how do I pass parameters when I'm using a run.cmd file?
|
I figured it out: in the run.cmd file, you need to put "%*" after your script name and it will detect any arguments you passed in the URL.
| 1.2 | true | 1 | 6,070 |
2019-04-27 17:03:24.327
|
What happens after shutting down the PC via subprocess?
|
I try to turn my pc off and restart it on LAN.
When getting one of the commands (turnoff or restart), I execute one of the followings:
subprocess.call(["shutdown", "-f", "-s", "-y"]) # Turn off
subprocess.call(["shutdown", "-f", "-r", "-t", "-c", "-y"]) # Restart
I'd like to inform the other side if the process was successfully initiated, and if the PC is in the desired state.
I know that it is possible to implement a function which will check if the PC is alive (which is a pretty good idea) several seconds after executing the commands, but how one can know how many seconds are needed? And what if the PC will be shut down a moment after sending a message stating that it is still alive?
I'm curious to know- what really happens after those commands are executed? Will the script keep running until the task manager will kill it? Will it stop running right after the command?
|
Programs like shutdown merely send a message to init (or whatever modern replacement) and exit immediately; it’s up to it what happens next. Typical Unix behavior is to first shut down things like SSH servers (which probably doesn’t kill your connection to the machine), then send SIGTERM to all processes, wait a few seconds (5 is typical) for signal handlers to run, and then send SIGKILL to any survivors. Finally, filesystems are unmounted and the hardware halt or reboot happens.
While there’s no guarantee that the first phase takes long enough for you to report successful shutdown execution, it generally will; if it’s a concern, you can catch the SIGTERM to buy yourself those few extra seconds to get the message out.
| 1.2 | true | 1 | 6,071 |
2019-04-27 20:18:09.337
|
How can I create a vCard qrcode with pyqrcode?
|
I am trying to generate a vCard QR code with the pyqrcode library but I cannot figure out the way to do it.
I have read their documentation 5 times and it doesn't say anything about vCards, only about URLs, and on the internet I could only find examples about WiFi. Does anybody know how I can do it?
I want to make a vCard QR code and afterwards display it on a Django web page.
|
Let's say we have two libraries:
pyqrcode: QR reader / writer
vobject: vCard serializer / deserializer
Flow:
a. Generate a QR img from "some" web site:
the web site sends JSON info => get the info from the JSON and serialize it with vobject to obtain a vCard string => pyqrcode.create(vcard string)
b. Show human-readable info from a QR img:
pyqrcode reads a QR img (created in a.) => deserialize it with vobject to obtain a JSON => show the info by parsing the JSON in the web site.
OR... after deserializing with vobject you can write a .vcard file
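A rough sketch of flow a. (assuming the vobject and pyqrcode packages, plus pypng for the PNG output; the contact details and file name are placeholders):

import vobject
import pyqrcode

card = vobject.vCard()
card.add('n').value = vobject.vcard.Name(family='Doe', given='Jane')
card.add('fn').value = 'Jane Doe'
card.add('tel').value = '+40 123 456 789'
vcard_text = card.serialize()              # the raw VCARD text to encode

qr = pyqrcode.create(vcard_text)
qr.png('vcard_qr.png', scale=6)            # image you can then serve from a Django view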
| 1.2 | true | 1 | 6,072 |
2019-04-28 15:35:56.843
|
Cloud SQL/NiFi: Connect to cloud sql database with python and NiFi
|
So, I am doing an ETL process in which I use Apache NiFi as the ETL tool along with a PostgreSQL database from Google Cloud SQL to read CSV files from GCS. As part of the process, I need to write a query to transform the data read from the CSV file and insert it into the table in the Cloud SQL database. So, based on NiFi, I need to write a Python script to execute SQL queries automatically on a daily basis. But the question here is: how can I write a Python script to connect to the Cloud SQL database? What configuration should be done? I have read something about the Cloud SQL proxy, but can I just use the Cloud SQL instance's internal IP address, put it in some config file, and create some DB connector out of it?
Thank you
Edit: I can connect to the Cloud SQL database from my VM using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres, but I need to run a Python script for the ETL process and there's a part of the process that needs to execute SQL. What I am trying to ask is how I can write a Python file that is used for executing the SQL,
e.g. in Python, query = 'select * from table ....' and then run
postgres.run_sql(query), which will execute the query. So how can I create this kind of executor?
|
I don't understand why you need to write any code in Python? I've done a similar process where I used GetFile (locally) to read a CSV file, parse and transform it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL likely reference something.google.com and you may need to open firewall rules using your cloud provider administration.
| 0 | false | 1 | 6,073 |
2019-04-29 12:55:44.847
|
Keras / NN - Handling NaN, missing input
|
These days I'm trying to teach myself machine learning and I'm going through some issues with my dataset.
Some of my rows (I work with CSV files that I create with a JS script; I feel more confident doing that in JS) are empty, which is normal, as I'm trying to build a guessing model, but the issue is that it results in having NaN values in my training set.
My NN was not training, so I added a piece of code to remove them from my set, but now I have issues because my model can't work with inputs of different sizes.
So my question is: how do I handle missing data? (I basically have 2 rows and can only have the value from 1 and can't merge them, as it will not give good results.)
I can remove them from my set, which would reduce the accuracy of my model in the end.
PS: if needed I'll post some code when I come back home.
|
You need to have the same input size during training and inference. If you have a few missing values (a few percent), you can always choose to replace the missing values with a 0 or with the average of the column. If you have more missing values (more than 50%), you are probably better off ignoring the column completely. Note that this is theoretical; the best way to make it work is to try different strategies on your data.
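A sketch of both strategies with pandas (the file name is a placeholder; df is the loaded training data):

import pandas as pd

df = pd.read_csv("training_data.csv")
df = df.dropna(axis=1, thresh=int(0.5 * len(df)))   # drop columns that are more than 50% empty
df = df.fillna(df.mean())                           # fill remaining gaps with column averages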
| 1.2 | true | 1 | 6,074 |
2019-04-30 15:36:33.210
|
Doc2Vec - Finding document similarity in test data
|
I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data to a specific document in the test data.
I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: "tag '-0.3502606451511383' not seen in training corpus/invalid" as there are vectors not in the dictionary.
|
The act of training-up a Doc2Vec model leaves it with a record of the doc-vectors learned from the training data, and yes, most_similar() just looks among those vectors.
Generally, doing any operations on new documents that weren't part of training will require the use of infer_vector(). Note that such inference:
ignores any unknown words in the new document
may benefit from parameter tuning, especially for short documents
is currently done just one document at a time in a single thread – so, acquiring inferred-vectors for a large batch of N-thousand docs can actually be slower than training a fresh model on the same N-thousand docs
isn't necessarily deterministic, unless you take extra steps, because the underlying algorithms use random initialization and randomized selection processes during training/inference
just gives you the vector, without loading it into any convenient storage-object for performing further most_similar()-like comparisons
On the other hand, such inference from a "frozen" model can be parallelized across processes or machines.
The n_similarity() method you mention isn't really appropriate for your needs: it's expecting lists of lookup-keys ('tags') for existing doc-vectors, not raw vectors like you're supplying.
The similarity_unseen_docs() method you mention in your answer is somewhat appropriate, but just takes a pair of docs, re-calculating their vectors each time – somewhat wasteful if a single new document's doc-vector needs to be compared against many other new documents' doc-vectors.
You may just want to train an all-new model, with both your "training documents" and your "test documents". Then all the "test documents" get their doc-vectors calculated, and stored inside the model, as part of the bulk training. This is an appropriate choice for many possible applications, and indeed could learn interesting relationships based on words that only appear in the "test docs" in a totally unsupervised way. And there's not yet any part of your question that gives reasons why it couldn't be considered here.
Alternatively, you'd want to infer_vector() all the new "test docs", and put them into a structure like the various KeyedVectors utility classes in gensim - remembering all the vectors in one array, remembering the mapping from doc-key to vector-index, and providing an efficient bulk most_similar() over the set of vectors.
| 1.2 | true | 2 | 6,075 |
2019-04-30 15:36:33.210
|
Doc2Vec - Finding document similarity in test data
|
I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.
I am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data to a specific document in the test data.
I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: "tag '-0.3502606451511383' not seen in training corpus/invalid" as there are vectors not in the dictionary.
|
It turns out there is a function called similarity_unseen_docs(...) which can be used to find the similarity of 2 documents in the test data.
However, I will leave the question unsolved for now, as this is not very optimal, since I would need to manually compare the specific document with every other document in the test data. Also, it compares the words in the documents instead of the vectors, which could affect accuracy.
| 0 | false | 2 | 6,075 |
2019-04-30 22:47:27.537
|
How to run a python program using sourcelair?
|
I'm trying to run a python program in the online IDE SourceLair. I've written a line of code that simply prints hello, but I am embarrassed to say I can't figure out how to RUN the program.
I have the console, web server, and terminal available on the IDE already pulled up. I just don't know how to start the program. I've tried it on Mac OSX and Chrome OS, and neither work.
I don't know if anyone has experience with this IDE, but I can hope. Thanks!!
|
Can I ask you why you are using SourceLair?
Well, I just figured it out in about 2 minutes... it's the same as using any other editor for Python.
All you have to do is run it in the terminal: python (nameoffile).py
| 0.201295 | false | 1 | 6,076 |
2019-05-01 22:51:50.737
|
How to implement Proximal Policy Optimization (PPO) Algorithm for classical control problems?
|
I am trying to implement the clipped PPO algorithm for classical control tasks like keeping room temperature, the charge of a battery, etc. within certain limits. So far I've only seen implementations in game environments. My question is: are game environments and classical control problems different when it comes to implementing the clipped PPO algorithm? If they are, help and tips on how to implement the algorithm for my case are appreciated.
|
I'm answering your question from a general RL point of view, I don't think the particular algorithm (PPO) makes any difference in this question.
I think there is no fundamental differences, both can be seen as discrete control problems. In a game you observe the state, then choose an action and act according to it, and receive reward an the observation of the subsequent state.
Now if you take a simple control problem, instead of a game you probably have a simulation (or just a very simple dynamic model) that describes the behavior of your problem. For example, the equations of motion for an inverted pendulum (another classical control problem). In some cases you might directly interact with the real system, not a model of it, but this is rare, as it can be really slow, and the typical sample complexities of RL algorithms make learning on a real (physical) system less practical.
Essentially you interact with the model of your problem just the same way as you do with a game: you observe a state, take an action and act, and observe the next state. The only difference is that while in games reward is usually pre-defined (some score or goal state), probably you need to define the reward function for your problem. But again, in many cases you also need to define rewards for games, so this is not a major difference either.
| 1.2 | true | 1 | 6,077 |
2019-05-03 22:13:05.803
|
Visualizing a frozen graph_def.pb
|
I am wondering how to go about visualizing my frozen graph def. I need it to figure out my TensorFlow network's input and output nodes. I have already tried several methods to no avail, like the summarize_graph tool. Does anyone have an answer for some things that I can try? I am open to clarifying questions, thanks in advance.
|
You can try to use TensorBoard. It is on the Tensorflow website...
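One way to get the frozen graph into TensorBoard is to import the GraphDef and write it to an event file; a sketch with the TensorFlow 1.x API (file and log-directory names are placeholders):

import tensorflow as tf

with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

writer = tf.summary.FileWriter("./logs", graph)   # then run: tensorboard --logdir ./logs
writer.close()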
| 0 | false | 1 | 6,078 |
2019-05-04 18:47:48.437
|
Python: how to create database and a GUI together?
|
I am new to Python and, to train myself, I would like to use Python to build a database that would store information about wine - bottle, date, rating, etc. The idea is that:
I could use to database to add a new wine entries
I could use the database to browse wines I have previously entered
I could run some small analyses
The design of my Python I am thinking of is:
Design database with Python package sqlite3
Make a GUI built on top of the database with the package Tkinter, so that I can both enter new data and query the database if I want.
My question is: would your recommend this design and these packages? Is it possible to build a GUI on top of a database? I know StackOverflow is more for specific questions rather than "project design" questions so I would appreciate if anyone could point me to forums that discuss project design ideas.
Thanks.
|
If it's just for you, sure there is no problem with that stack.
If I were doing it, I would skip Tkinter, and build something using Flask (or Django.) Doing a web page as a GUI yields faster results, is less fiddly, and more applicable to the job market.
| 0 | false | 1 | 6,079 |
2019-05-05 00:30:24.233
|
Pandas read_csv method can't get 'œ' character properly while using encoding ISO 8859-15
|
I have some trouble reading, with pandas, a CSV file which includes the special character 'œ'.
I've done some research and it appears that this character has been added to the ISO 8859-15 encoding standard.
I've tried to specify this encoding standard to the pandas read_csv method, but it doesn't properly get this special character (I get a '☐' instead) in the resulting dataframe:
df= pd.read_csv(my_csv_path, ";", header=None, encoding="ISO-8859-15")
Does someone know how I could get the right 'œ' character (or, even better, the string 'oe') instead of this?
Thanks a lot :)
|
Anyone have a clue? I've managed the problem by manually rewriting this special character before reading my CSV with pandas, but that doesn't answer my question :(
| 0 | false | 1 | 6,080 |
2019-05-06 11:57:11.053
|
Using OpenCV with PyPy
|
I am trying to run a python script using OpenCV with PyPy, but all the documentation that I found didn't work.
The installation of PyPy went well, but when I try to run the script it says that it can't find OpenCV modules like 'cv2' for example, despite having cloned opencv for pypy directly from a github repository.
I would need to know how to do it exactly.
|
pip install opencv-python worked well for me on python 2.7, I can import and use cv2.
| 0.201295 | false | 1 | 6,081 |
2019-05-07 06:21:51.733
|
Getting ARERR 149 A user name must be supplied in the control record
|
I have a SOAP URL; while running the URL through the browser I am getting a WSDL response. But when I am trying to call a method in the response using the required parameter list, it is showing "ARERR [149] A user name must be supplied in the control record". I tried using PHP as well as Python, but I am getting the same error.
I searched this error and got information like this: "The name field of the ARControlStruct parameter is empty. Supply the name of an AR System user in this field." But nowhere did I see how to supply the user name parameter.
|
I got the solution for this problem. Following are the steps I followed to solve the issue (I used "zeep", a 3rd-party module, to solve this):
Run the following command to understand WSDL:
python -mzeep wsdl_url
Search for string "Service:". Below that we can see our operation name
For my operation I found following entry:
MyOperation(parameters..., _soapheaders={parameters: ns0:AuthenticationInfo})
which clearly communicates that I have to pass the parameters and an auth param using the kwarg "_soapheaders"
With that I came to know that I have to pass my authentication element as _soapheaders argument to MyOperation function.
Created Auth Element:
auth_ele = client.get_element('ns0:AuthenticationInfo')
auth = auth_ele(userName='me', password='mypwd')
Passed the auth to my Operation:
client.service.MyOperation('parameters..', _soapheaders=[auth])
| 0 | false | 1 | 6,082 |
2019-05-07 20:45:21.080
|
How to implement Breadth-First-Search non-recursively for a directed graph on python
|
I'm trying to implement a BFS function that will print a list of nodes of a directed graph as visited using Breadth-First-Search traversal. The function has to be implemented non-recursively and it has to traverse through all the nodes in a graph, so if there are multiple trees it will print in the following way:
Tree 1: a, b
Tree 2: d, e, h
Tree 3: .....
My main difficulty is understanding how to make the BFS function traverse through all the nodes if the graph has several trees, without reprinting previously visited nodes.
|
BFS is usually done with a queue. When you process a node, you push its children onto the queue. After processing the node, you process the next one in the queue.
This is by nature non-recursive.
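A sketch of that queue-based traversal that also covers every tree in the graph (assuming the graph is an adjacency dict mapping each node to a list of its successors):

from collections import deque

def bfs_forest(graph):
    visited = set()
    tree_no = 0
    for start in graph:                      # restart BFS for every unvisited component
        if start in visited:
            continue
        tree_no += 1
        order, queue = [], deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            order.append(node)
            for child in graph.get(node, []):
                if child not in visited:     # never re-print a visited node
                    visited.add(child)
                    queue.append(child)
        print("Tree %d: %s" % (tree_no, ", ".join(map(str, order))))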
| 0 | false | 1 | 6,083 |
2019-05-08 08:51:35.840
|
How to kill tensorboard with Tensorflow2 (jupyter, Win)
|
sorry for the noob question, but how do I kill the Tensorflow PID?
It says:
Reusing TensorBoard on port 6006 (pid 5128), started 4 days, 18:03:12 ago. (Use '!kill 5128' to kill it.)
But I cannot find any PID 5128 in the Windows Task Manager. Using '!kill 5128' within Jupyter returns an error saying the command kill cannot be found. Using it in the Windows cmd or conda cmd does not work either.
Thanks for your help.
|
If you clear the contents of AppData/Local/Temp/.tensorboard-info, and delete your logs, you should be able to have a fresh start
| 0.999967 | false | 1 | 6,084 |
2019-05-08 11:29:27.797
|
how to extract line from a word2vec file?
|
I have created a word2vec file and I want to extract only the line at position [0].
This is the word2vec file:
`36 16
Activity 0.013954502 0.009596351 -0.0002082094 -0.029975398 -0.0244055 -0.001624907 0.01995442 0.0050479663 -0.011549354 -0.020344704 -0.0113901375 -0.010574887 0.02007604 -0.008582828 0.030914625 -0.009170294
DATABASED%GWC%5 0.022193532 0.011890317 -0.018219836 0.02621059 0.0029900416 0.01779779 -0.026217759 0.0070709535 -0.021979155 0.02609082 0.009237218 -0.0065825963 -0.019650755 0.024096865 -0.022521153 0.014374277
DATABASED%GWC%7 0.021235622 -0.00062567473 -0.0045315344 0.028400827 0.016763352 0.02893731 -0.013499333 -0.0037113864 -0.016281538 0.004078895 0.015604254 -0.029257657 0.026601797 0.013721668 0.016954066 -0.026421601`
|
glove_model["Activity"] should get you its vector representation from the loaded model. This is because glove_model is an object of type KeyedVectors and you can use key value to index into it.
| 1.2 | true | 1 | 6,085 |
2019-05-08 17:12:26.120
|
Handling many-to-many relationship from existing database using Django ORM
|
I'm starting to work with Django, already done some models, but always done that with 'code-first' approach, so Django handled the table creations etc. Right now I'm integrating an already existing database with ORM and I encountered some problems.
Database has a lot of many-to-many relationships so there are quite a few tables linking two other tables. I ran inspectdb command to let Django prepare some models for me. I revised them, it did rather good job guessing the fields and relations, but the thing is, I think I don't need those link tables in my models, because Django handles many-to-many relationships with ManyToManyField fields, but I want Django to use that link tables under the hood.
So my question is: Should I delete the models for link tables and add ManyToManyFields to corresponding models, or should I somehow use this models?
I don't want to somehow mess-up database structure, it's quite heavy populated.
I'm using Postgres 9.5, Django 2.2.
|
In many cases it doesn't matter. If you would like to keep the code minimal then m2m fields are a good way to go. If you don't control the database structure it might be worth keeping the inspectdb schema in case you have to do it again after schema changes that you don't control. If the m2m link tables can grow properties of their own then you need to keep them as models.
| 0 | false | 1 | 6,086 |
2019-05-08 20:10:18.047
|
Is there a way to use the "read_csv" method to read the csv files in order they are listed in a directory?
|
I am making plots on one figure using matplotlib from CSV files; however, I want the plots in order. I want to somehow use the read_csv method to read the CSV files from a directory in the order they are listed in, so that they are output in the same fashion.
I want the plots listed under each other the same way the csv files are listed in the directory.
|
You could use os.listdir() to get all the files in the folder and then sort them in a certain way, for example by name (the Python built-in sorted() would be enough). If you want a fancier ordering, you could retrieve both the name and the last-modified date, store them in a dictionary, order the keys and retrieve the values. So, as @Fausto Morales said, it all depends on which order you would like them to be sorted in.
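A sketch of the by-name version with pandas (the folder path is a placeholder and every .csv in it is assumed to be readable):

import os
import pandas as pd

folder = "data"
files = sorted(f for f in os.listdir(folder) if f.endswith(".csv"))
frames = [pd.read_csv(os.path.join(folder, f)) for f in files]
# plot the frames in this same order, e.g. one subplot per file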
| 1.2 | true | 1 | 6,087 |
2019-05-09 09:52:44.457
|
How to make a python script run forever online?
|
I have a Python script that monitors a website, and I want it to send me a notification when some particular change happens to the website.
My question is: how can I make that Python script run forever somewhere else (not my machine, because I want it to send me a notification even when my machine is off)?
I have thought about RDP, but I wanted to have your opinions also.
(PS: FREE Service if it's possible, otherwise the lowest cost)
Thank you!
|
I would suggest you to setup AWS EC2 instance with whatever OS you want.
For beginner, you can get 750 hours of usage for free where you can run your script on.
| 0.386912 | false | 1 | 6,088 |
2019-05-12 11:06:47.317
|
How to write binary file with bit length not a multiple of 8 in Python?
|
I'm working on a tool generates dummy binary files for a project. We have a spec that describes the real binary files, which are created from a stream of values with various bit lengths. I use input and spec files to create a list of values, and the bitstring library's BitArray class to convert the values and join them together.
The problem is that the values' lengths don't always add up to full bytes, and I need the file to contain the bits as-is. Normally I could use BitArray.tofile(), but that method automatically pads the file with zeroes at the end.
Is there another way how to write the bits to a file?
|
You need to pad the (say) 7-bit value so it matches a whole number of bytes:
1010101 (7 bits) --> 01010101
1111 (4 bits) --> 00001111
The padding of the most significant digits does not affect the data taken from the file.
| 0 | false | 1 | 6,089 |
2019-05-14 11:09:07.967
|
Send variable between PCs over the internet using python
|
I have two computers with internet connection. They both have public IPs and they are NATed. What I want is to send a variable from PC A to PC B and close the connection.
I have thought of two approaches for this:
1) Using sockets. PC B will have listen to a connection from PC A. Then, when the variable will be sent, the connection will be closed. The problem is that, the sockets will not communicate, because I have to forward the traffic from my public IP to PC B.
2) An out-of-the-box idea is to have the variable published online somewhere. I mean making a public IP hold the variable in HTML; the PC would then GET it from that IP and read the variable. The problem is, how do I make that variable accessible over the internet?
Any ideas would be much appreciated.
|
I figured a solution out. I made a dummy server using Flask and hosted it at pythonanywhere.com for free. The variables are posted to the server from PC A, and then PC B uses the GET method to retrieve them.
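A minimal sketch of such a dummy server (Flask; the route and variable names are placeholders):

from flask import Flask, request

app = Flask(__name__)
stored = {"value": None}

@app.route("/variable", methods=["GET", "POST"])
def variable():
    if request.method == "POST":          # PC A posts the variable
        stored["value"] = request.form.get("value")
        return "ok"
    return str(stored["value"])           # PC B reads it back with a GET

PC A can then send the value with requests.post(url, data={"value": ...}) and PC B can fetch it with requests.get(url).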
| 1.2 | true | 1 | 6,090 |
2019-05-15 19:48:44.263
|
Pulling duration stats API in Airflow
|
In airflow, the "Gantt" chart offers quite a good view on performance of the ran tasks. It offers stats like start/end time, duration and etc.
Do you guys know a way to programmatically pull these stats via the Airflow API? I would like to use these stats and generate periodic reports on the performance of my tasks and how it changes over time.
My airflow version is: 1.9
Python: 3.6.3
Running on top of docker
Thanks!
Kelvin
Airflow online documentation
|
One easy approach could be to set up a SQLAlchemy connection; Airflow stores/sends all the data in there once the configuration is completed (DAG info/stats/failures, task info/stats, etc.).
Edit airflow.cfg and add:
sql_alchemy_conn = mysql://------/table_name
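With that connection in place you can query the metadata database directly; a sketch with SQLAlchemy and pandas (the connection URI is a placeholder, and the task_instance table/column names are my recollection of the Airflow 1.9 schema, so treat them as an assumption):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql://user:password@host/airflow_db")
stats = pd.read_sql(
    "SELECT dag_id, task_id, execution_date, start_date, end_date, duration "
    "FROM task_instance",
    engine)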
| 1.2 | true | 1 | 6,091 |