Q_CreationDate | Title | Question | Answer | Score | Is_accepted | N_answers | Q_Id |
---|---|---|---|---|---|---|---|
2018-09-27 09:20:03.150
|
Choosing best semantics for related variables in an untyped language like Python
|
Consider the following situation: you work with audio files, and soon there are different contexts of what "an audio" actually is within the same solution.
On one side this would be more obvious with typing; while Python has classes and type hints, they are less explicit in the code than in Java. I think this occurs in any untyped language.
My question is how to have less ambiguous variable names and whether there is something like an official and widely accepted guideline or even a standard like PEP/RFC for that or comparable.
Examples for variables:
A string type to designate the path/filename of the actual audio file
A file handle for the above to do the I/O
Then, in the package pydub, you deal with the type AudioSegment
While in the package moviepy, you deal with the type AudioFileClip
Using all four together requires, in my eyes, a clever naming strategy, but maybe I am just overlooking something.
Maybe this is a quite exotic example, but if you think of any other media types, this should give a broader view. Likewise, is a Document a handle, a path or an abstract object?
|
There is no definitive standard/rfc to name your variables. One option is to prefix/suffix your variables with a (possibly short form) type. For example, you can name a variable as foo_Foo where variable foo_Foo is of type Foo.
| 0 | false | 1 | 5,731 |
2018-09-27 14:44:45.557
|
Holoviews - network graph - change edge color
|
I am using holoviews and bokeh with Python 3 to create an interactive network graph from NetworkX. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that?
|
Problem solved, the option to change edges color is edge_line_color and not edge_color.
| 0.386912 | false | 1 | 5,732 |
2018-09-27 15:09:52.837
|
Make Pipenv create the virtualenv in the same folder
|
I want Pipenv to make virtual environment in the same folder with my project (Django).
I searched and found the PIPENV_VENV_IN_PROJECT option but I don't know where and how to use this.
|
For posterity's sake: if you find pipenv is not creating a virtual environment in the proper location, you may have an erroneous Pipfile somewhere confusing the pipenv shell call, in which case I would delete it from any path locations that are not explicitly linked to a repository.
| 0.16183 | false | 2 | 5,733 |
2018-09-27 15:09:52.837
|
Make Pipenv create the virtualenv in the same folder
|
I want Pipenv to make virtual environment in the same folder with my project (Django).
I searched and found the PIPENV_VENV_IN_PROJECT option but I don't know where and how to use this.
|
This may help someone else: I found another easy way to solve this!
Just make an empty folder inside your project and name it .venv,
and pipenv will use this folder.
| 1 | false | 2 | 5,733 |
2018-09-27 15:26:56.820
|
Force tkinter listbox to highlight item when selected before task is started
|
I have a tkinter listbox. When I select an item it performs a few actions and then returns the results; while that is happening, the item I selected does not show as selected. Is there a way to force it to show as selected immediately, so it's obvious to the user that they selected the correct one while waiting on the returned results? I'm using Python 3.4 on a Windows 7 machine.
|
The item does not show as selected right away because the time-consuming actions are executed before the GUI is updated. You can force the GUI to update before executing the actions by calling window.update_idletasks(). A short sketch follows below.
| 0 | false | 1 | 5,734 |
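A minimal sketch of the approach in the answer above: the selection handler flushes pending redraws with update_idletasks() before the slow work starts. The list items and the sleep() standing in for the slow actions are placeholders.

```python
import time
import tkinter as tk

root = tk.Tk()
listbox = tk.Listbox(root)
for item in ("alpha", "beta", "gamma"):
    listbox.insert(tk.END, item)
listbox.pack()

def on_select(event):
    # Redraw the widget first so the highlight is visible, then do the slow work.
    root.update_idletasks()
    time.sleep(3)                      # stand-in for the time-consuming actions
    print("done:", listbox.curselection())

listbox.bind("<<ListboxSelect>>", on_select)
root.mainloop()
```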
2018-09-27 20:02:47.580
|
In Python DataFrame how to find out number of rows that have valid values of columns
|
I want to find the number of rows that have certain values such as None or "" or NaN (basically empty values) in all columns of a DataFrame object. How can I do this?
|
Use df.isnull().sum() to count the None/NaN values per column.
Use df.eq(value).sum() to count any other kind of value, including the empty string "". A sketch follows below.
| 0.265586 | false | 1 | 5,735 |
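A small sketch of the counting described above, with the extra (assumed) step of treating empty strings as missing so the row-level counts include them; the toy DataFrame is made up.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, None, 3], "b": ["x", "", np.nan]})

# Treat empty strings like missing values, then count rows with any gap.
cleaned = df.replace("", np.nan)
rows_with_missing = cleaned.isnull().any(axis=1).sum()
valid_rows = len(cleaned) - rows_with_missing

print(rows_with_missing, valid_rows)   # 2 1
```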
2018-09-28 10:00:04.037
|
.get + dict variable
|
I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None)
Do you know how to do that the right way?
|
You don't say what the error is, but two things are possibly wrong:
it should be charge.get('metadata', None)
you can't directly do it on two consecutive levels. If the metadata key returns None, you can't go on and ask it for the distinct_id key. You could return an empty dict and apply get to that, e.g. something like charge.get('metadata', {}).get('distinct_id', None) (a short sketch follows below).
| 1.2 | true | 2 | 5,736 |
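A minimal illustration of the chained .get() from the answer above, using a made-up charge dict.

```python
charge = {"metadata": {"distinct_id": "abc123"}}

# Each .get() falls back to an empty dict so the next lookup never raises.
print(charge.get("metadata", {}).get("distinct_id"))        # 'abc123'
print({}.get("metadata", {}).get("distinct_id"))            # None

# If 'metadata' can be present but explicitly None, guard with `or {}`:
print((charge.get("metadata") or {}).get("distinct_id"))    # 'abc123'
```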
2018-09-28 10:00:04.037
|
.get + dict variable
|
I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None)
Do you know how to do that the right way?
|
As @blue_note mentioned, you cannot use two consecutive levels directly. However, you can try something like
charge.get('metadata', {}).get('distinct_id')
Here, we try to get 'metadata' from charge; if it is not found, an empty dictionary is used instead and we try to get 'distinct_id' from that (where it technically does not exist). In this scenario you need not worry about whether metadata exists: if it does, distinct_id is looked up there, and otherwise None is returned.
Hope this solves your problem.
Cheers!
| 0.135221 | false | 2 | 5,736 |
2018-09-28 16:43:57.900
|
PyMongo how to get the last item in the collection?
|
In the MongoDB console, I know that you can use $last and $natural. In PyMongo, I could not use them; maybe I was doing something wrong?
|
Another way is:
db.collection.find().limit(1).sort([('$natural',-1)])
This seemed to work best for me.
| 0.201295 | false | 1 | 5,737 |
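A short PyMongo sketch of the sort shown in the answer above; the connection string and database/collection names are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # assumed connection string
collection = client["mydb"]["mycollection"]           # assumed db/collection names

# Sort by natural (insertion) order descending and take one document.
last_doc = collection.find_one(sort=[("$natural", -1)])

# Equivalent cursor form, matching the answer above:
cursor = collection.find().sort([("$natural", -1)]).limit(1)
print(last_doc, list(cursor))
```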
2018-09-29 12:08:21.843
|
how can I use Transfer Learning for LSTM?
|
I intend to implement image captioning. Would it be possible to use transfer learning for the LSTM? I have used a pretrained VGG16 (transfer learning) to extract features as input to the LSTM.
|
As far as I have discovered, we can't use transfer learning on the LSTM weights. I think the cause is the architecture of LSTM networks.
| 1.2 | true | 1 | 5,738 |
2018-09-29 19:52:13.553
|
Is there any way to retrieve file name using Python?
|
In a Linux directory, I have several numbered files, such as "day1" and "day2". My goal is to write code that reads the numbers from the file names, takes the biggest one, adds 1, and creates a new file. So, for example, if there are files 'day1', 'day2' and 'day3', the code should read the list of files and add 'day4'. To do so, at the very least I need to know how to retrieve the numbers from the file names.
|
Get all the files with the os module (e.g. os.listdir) and then use the re module to extract the numbers. If you don't want to look into regex, you could remove the letters from the string with replace() and convert the remaining string with int(). A sketch follows below.
| 0 | false | 1 | 5,739 |
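A hedged sketch of the os + re approach from the answer above; the directory path is an assumption and the dayN naming pattern is taken from the question.

```python
import os
import re

directory = "/path/to/days"          # assumed directory
numbers = []
for name in os.listdir(directory):
    match = re.match(r"day(\d+)$", name)
    if match:
        numbers.append(int(match.group(1)))

next_number = max(numbers, default=0) + 1
new_path = os.path.join(directory, f"day{next_number}")
open(new_path, "w").close()          # create the empty 'day4'-style file
```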
2018-09-30 05:33:24.990
|
python 3, how print function changes output?
|
The following were what I did in python shell. Can anyone explain the difference?
datetime.datetime.now()
datetime.datetime(2018, 9, 29, 21, 34, 10, 847635)
print(datetime.datetime.now())
2018-09-29 21:34:26.900063
|
The first is the result of calling repr on the datetime value, the second is the result of calling str on a datetime.
The Python shell calls repr on values other than None before printing them, while print tries str before calling repr (if str fails).
This is not dependent on the Python version.
| 1.2 | true | 1 | 5,740 |
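A tiny illustration of the difference described above: the shell shows repr(), while print() shows str().

```python
import datetime

now = datetime.datetime.now()
print(repr(now))   # e.g. datetime.datetime(2018, 9, 29, 21, 34, 10, 847635)
print(str(now))    # e.g. 2018-09-29 21:34:26.900063  (what print() displays)
```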
2018-09-30 17:25:43.813
|
Python's cmd.Cmd case insensitive commands
|
I am using Python's cmd module, which takes any do_* method and registers it as a command, so a do_show() method will be executed if the user types "show".
How can I execute the do_show() method for any variation of capitalization in the user input, e.g. SHOW, Show, sHoW and so on, without getting a Command Not Found error?
I think the answer has something to do with overriding the Cmd class and forcing it to take the user's input.lower(), but I don't know how to do that.
|
You should override onecmd to achieve the desired functionality; a sketch follows below.
| 1.2 | true | 1 | 5,741 |
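A minimal sketch of overriding onecmd as suggested above; only the command word is lower-cased so arguments keep their original case. The example commands are made up.

```python
import cmd

class MyShell(cmd.Cmd):
    prompt = "> "

    def onecmd(self, line):
        # Lower-case only the command word; leave the arguments untouched.
        command, arg, _ = self.parseline(line)
        if command:
            line = command.lower() + (" " + arg if arg else "")
        return super().onecmd(line)

    def do_show(self, arg):
        print("showing", arg)

    def do_quit(self, arg):
        return True

if __name__ == "__main__":
    MyShell().cmdloop()
```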
2018-10-01 07:38:38.010
|
Possible ways to embed python matplotlib into my presentation interactively
|
I need to present my data in various graphs. Usually what I do is to take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint.
Unfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and sometimes he dislikes my color palette. The data is all there, but because it's an image there's no way I can change that in the meeting.
My superior seems to really care about those things and spend quite a lot of time telling me how to make plots in every single meeting. He (usually) will not comment on my data before I make a plot the way he wants.
That's where my question becomes relevant. Right now what I have in my mind is to have an interactive canvas embedded in my PowerPoint such that I can change the range of the axis, color of my data point, etc in real time. I have been searching online for such a thing but it comes out empty. I wonder if that could be done and how can it be done?
For some simple graphs Excel plot may work, but usually I have to present things in 1D or 2D histograms/density plots with millions of entries. Sometimes I have to fit points with complicated mathematical formulas and that's something Excel is incapable of doing and I must use scipy and pandas.
The closest thing to this I found online is rise with jupyter which convert a jupyter notebook into a slide show. I think that is a good start which allows me to run python code in real time inside the presentation, but I would like to use PowerPoint related solutions if possible, mostly because I am familiar with how PowerPoint works and I still find certain PowerPoint features useful.
Thank you for all your help. While I do prefer PowerPoint, any other products that allows me to modify plots in my presentation in real time or alternatives of rise are welcomed.
|
When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. If you decide to link to the picture, you are free to change it outside of PowerPoint. This opens up the possibility for the following workflow:
Next to your presentation you have a Python IDE or Jupyter notebook open with the scripts that generate the figures. They all have a savefig command in them to save to exactly the location on disk from which you link the images in PowerPoint (see the sketch below). If you need to change a figure, you make the changes in the Python code, run the script (or cell) and switch back to PowerPoint, where the newly created image is updated.
Note that I would not recommend putting too much effort into finding a better solution to this, but rather spend the time thinking about good visual representations of the data, for the following reasons: 1. If your instructor's demands are completely unreasonable ("I like blue better than green, so you need to use blue."), then it's not worth spending effort on satisfying them at all. 2. If your instructor's demands are based on the fact that the current representation does not allow the data to be interpreted correctly, this can be prevented by spending more thought on good plots prior to the presentation. This is a learning process, which I guess your instructor wants you to internalize. After all, you won't get a degree in computer science for writing a PowerPoint backend to matplotlib, but rather for being able to present your research in a way suited to your subject.
| 1.2 | true | 1 | 5,742 |
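A minimal sketch of the linked-figure workflow described above; the data, the styling tweaks, and the output path that PowerPoint links to are all placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 500)
fig, ax = plt.subplots()
ax.plot(x, np.exp(x / 3), color="tab:blue")   # tweak colors/scales here between runs
ax.set_yscale("log")
# Save to the exact path that the PowerPoint picture links to (assumed path).
fig.savefig(r"C:\talks\figures\fig1.png", dpi=200)
```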
2018-10-01 18:01:04.283
|
"No package 'coinhsl' found": IPOPT compiles and passes test, but pyomo cannot find it?
|
I don't know if the problem is between me and Pyomo.DAE or between me and IPOPT. I am doing this all from the command-line interface in Bash on Ubuntu on Windows (WSL). When I run:
JAMPchip@DESKTOP-BOB968S:~/examples/dae$ python3 run_disease.py
I receive the following output:
WARNING: Could not locate the 'ipopt' executable, which is required for solver ipopt
Traceback (most recent call last):
  File "run_disease.py", line 15, in <module>
    results = solver.solve(instance,tee=True)
  File "/usr/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 541, in solve
    self.available(exception_flag=True)
  File "/usr/lib/python3.6/site-packages/pyomo/opt/solver/shellcmd.py", line 122, in available
    raise ApplicationError(msg % self.name)
pyutilib.common._exceptions.ApplicationError: No executable found for solver 'ipopt'
When I run "make test" in the IPOPT build folder, I received:
Testing AMPL Solver Executable...
Test passed!
Testing C++ Example...
Test passed!
Testing C Example...
Test passed!
Testing Fortran Example...
Test passed!
But one major concern is that the "configure" output contained the following:
checking for COIN-OR package HSL... not given: No package 'coinhsl' found
There were also a few warnings when I ran "make". I am not at all sure where the issue lies. How do I make python3 find IPOPT, and how do I tell whether IPOPT is on the system for pyomo.dae to find? I am pretty confident that I have "coinhsl" in the HSL folder, so how do I make sure that it is found by IPOPT?
|
As sascha states, you need to make sure that the directory containing your IPOPT executable (likely the build folder) is in your system PATH. That way, if you were to open a terminal and call ipopt from an arbitrary directory, it would be detected as a valid command. This is distinct from being able to call make test from within the IPOPT build folder.
| 0 | false | 1 | 5,743 |
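If adjusting PATH is inconvenient, Pyomo's SolverFactory can also be pointed at the executable directly; the path below is an assumption, not the asker's real build location.

```python
from pyomo.environ import SolverFactory

# Assumed path to the freshly built binary -- replace with your own.
solver = SolverFactory("ipopt", executable="/home/user/Ipopt/build/bin/ipopt")
# results = solver.solve(instance, tee=True)
```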
2018-10-02 13:17:45.260
|
how to disable printscreen key on windows using python
|
Is there any way to disable the print screen key when running a python application?
Maybe editing the windows registry is the way?
Thanks!
|
Print Screen is OS functionality.
There is no ASCII code for Print Screen,
and there are many ways to take a screenshot.
Thus, you can disable the keyboard key, but it is difficult to stop the user from taking a screenshot altogether.
| 0 | false | 1 | 5,744 |
2018-10-04 09:28:55.060
|
How does scrapy behave when enough resources are not available
|
I am running multiple scrapers using the command line which is an automated process.
Python : 2.7.12
Scrapy : 1.4.0
OS : Ubuntu 16.04.4 LTS
I want to know how scrapy handles the case when
There is not enough memory/cpu bandwidth to start a scraper.
There is not enough memory/cpu bandwidth during a scraper run.
I have gone through the documentation but couldn't find anything.
Anyone answering this, you don't have to know the right answer, if you can point me to the general direction of any resource you know which would be helpful, that would also be appreciated
|
The operating system kills any process that tries to access more memory than the limit.
This applies to Python programs too, and Scrapy is no different.
More often than not, network bandwidth is the bottleneck in scraping/crawling applications.
Memory would only be a bottleneck if there is a serious memory leak in your application.
Your application would just be very slow if the CPU is being shared by many processes on the same machine.
| 1.2 | true | 1 | 5,745 |
2018-10-04 17:55:05.990
|
how to change raspberry pi ip in flask web service
|
I have a Raspberry Pi 3B+ and I'm showing an IP camera stream using OpenCV in Python.
My Raspberry's default IP is in the 169.254.210.x range, and I have to put the camera in the same range.
How can I change my Raspberry's IP?
Also, if I run the program on a web service such as Flask, can I change the Raspberry Pi server's IP every time?
|
You can statically change the IP of your Raspberry Pi by editing /etc/network/interfaces.
Try editing the line of the file which contains address.
| 0 | false | 1 | 5,746 |
2018-10-04 19:48:49.993
|
"No module named 'docx'" error but "requirement already satisfied" when I try to install
|
From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location?
Edit
In case it's relevant - I installed docx using easy_install, not pip.
|
pip show docx
This will show you where it is installed. However, if you're using python3 then
pip install python-docx
might be the one you need.
| 0 | false | 2 | 5,747 |
2018-10-04 19:48:49.993
|
"No module named 'docx'" error but "requirement already satisfied" when I try to install
|
From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location?
Edit
In case it's relevant - I installed docx using easy_install, not pip.
|
Please install python-docx.
Then import docx (not python-docx).
| 0 | false | 2 | 5,747 |
2018-10-05 16:36:21.090
|
How can I see what packages were installed using `sudo pip install`?
|
I know that installing Python packages using sudo pip install is a bad security risk. Unfortunately, I found this out after installing quite a few packages using sudo.
Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment.
I tried pip list to get information about the packages, but it only gave me their version. pip show <package name> gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.
|
Any modules you installed with sudo will be owned by root, so you can open your shell/terminal, cd to the site-packages directory and check the directory owners with ls -la; any entry that has root in the owner column is one you want to uninstall. A small sketch follows below.
| 1.2 | true | 2 | 5,748 |
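A rough Python equivalent of the ls -la check in the accepted answer above (Unix-only, since it uses the pwd module); note it will also list packages that were legitimately installed by the OS as root.

```python
import os
import pwd
import site

# Flag site-packages entries owned by root (likely `sudo pip` installs).
for sp_dir in site.getsitepackages():
    if not os.path.isdir(sp_dir):
        continue
    for name in os.listdir(sp_dir):
        path = os.path.join(sp_dir, name)
        if pwd.getpwuid(os.stat(path).st_uid).pw_name == "root":
            print(path)
```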
2018-10-05 16:36:21.090
|
How can I see what packages were installed using `sudo pip install`?
|
I know that installing Python packages using sudo pip install is a bad security risk. Unfortunately, I found this out after installing quite a few packages using sudo.
Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment.
I tried pip list to get information about the packages, but it only gave me their version. pip show <package name> gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.
|
try the following command: pip freeze
| 0 | false | 2 | 5,748 |
2018-10-06 20:14:32.777
|
Is it possible to change the loss function dynamically during training?
|
I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code.
For example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss.
|
You have to implement your own training algorithm; this is possible with TensorFlow if you write a custom training loop (a sketch follows below).
| 0 | false | 1 | 5,749 |
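One way to do that in TensorFlow is a custom training loop that picks the loss function per epoch. The model, the stand-in second loss, and the epoch threshold below are all assumptions for illustration; a true 0-1 loss is not differentiable, so a surrogate would be needed in practice.

```python
import tensorflow as tf

# Placeholder model/optimizer; the second loss is a made-up differentiable stand-in.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
optimizer = tf.keras.optimizers.Adam()
first_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def second_loss(y_true, y_pred):
    y_onehot = tf.one_hot(y_true, depth=10)
    return tf.reduce_mean(tf.square(y_onehot - tf.nn.softmax(y_pred)))

def train_step(x, y, loss_fn):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

epochs = 10
for epoch in range(epochs):
    loss_fn = first_loss if epoch < epochs // 2 else second_loss
    # for x_batch, y_batch in dataset:
    #     train_step(x_batch, y_batch, loss_fn)
```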
2018-10-08 17:02:32.603
|
Keras LSTM Input Dimension understanding each other
|
I have been trying to play around with it for a while. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get the expected outputs. But what I have never seen yet is, for example with stock data, how we can make the LSTM model understand patterns between different dimensions, say the close price is much higher than normal because volume is low.
The point of this is that I want to do a test with stock prediction, but make it so that each dimension is not only reliant on previous time steps, but also reliant on the other dimensions it has.
Sorry if I am not asking the question correctly, please ask more questions if I am not explaining it clearly.
|
First: regressors will simply replicate a feature that gives direct intuition about the predicted value in order to keep the error minimized, rather than actually trying to predict it. Try to focus on binary or multiclass classification instead, e.g. whether the closing price goes up or down, or by how much.
Second: always engineer the raw features to give more explicit patterns to the ML algorithm. Think of inputs such as Volume(t) - Volume(t-1), close(t)^2 - close(t-1)^2, or technical indicators (RSI, CCI, OBV, etc.). Create your own features. You can use the pyti library for technical indicators.
| 0 | false | 1 | 5,750 |
2018-10-09 06:31:10.137
|
SoftLayer API: order a 128 subnet
|
We are trying to order a 128-IP subnet, but it looks like it doesn't work; we get an error saying Invalid combination specified for ordering a subnet. The same code works to order a 64-IP subnet. Any thoughts on how to order a 128-IP subnet?
network_mgr = SoftLayer.managers.network.NetworkManager(client)
network_mgr.add_subnet('private', 128, vlan_id, test_order=True)
Traceback (most recent call last):
File "subnet.py", line 11, in <module>
result = nwmgr.add_subnet('private', 128, vlan_id, test_order=True)
File "/usr/local/lib/python2.7/site-packages/SoftLayer/managers/network.py", line 154, in add_subnet
raise TypeError('Invalid combination specified for ordering a'
TypeError: Invalid combination specified for ordering a subnet.
|
Currently it is not possible to add a 128-IP subnet to the order; the package used by the manager to order subnets only allows subnets of capacity 64, 32, 16, 8 and 4.
This is because the package does not contain any item for a 128-IP-address subnet, which is why you are getting the exception you provided.
You may also verify this through the Portal UI; if you can see a 128-IP-address option in the UI for your account, please update this forum with a screenshot.
| 0 | false | 1 | 5,751 |
2018-10-09 10:19:24.127
|
Add Python to the Windows path
|
If I forget to add Python to the path while installing it, how can I add it to my Windows path afterwards?
Without adding it to the path I am unable to use it. Also, how do I make Python 3 the default?
|
Edit Path in Environment Variables.
Add Python's path to the end of the list (the entries are separated by ';').
For example:
C:\Users\AppData\Local\Programs\Python\Python36;
C:\Users\AppData\Local\Programs\Python\Python36\Scripts
If you want to make it the default, you have to edit the system environment variables
and edit the Path entry, for example:
C:\Windows;C:\Windows\System32;C:\Python27
Now Python 3 will be the default Python on your system.
You can check it with python --version.
| 0.386912 | false | 1 | 5,752 |
2018-10-09 11:15:40.860
|
Deploying python with docker, images too big
|
We've built a large python repo that uses lots of libraries (numpy, scipy, tensor flow, ...) And have managed these dependencies through a conda environment. Basically we have lots of developers contributing and anytime someone needs a new library for something they are working on they 'conda install' it.
Fast forward to today and now we need to deploy some applications that use our repo. We are deploying using docker, but are finding that these images are really large and causing some issues, e.g. 10+ GB. However each individual application only uses a subset of all the dependencies in the environment.yml.
Is there some easy strategy for dealing with this problem? In a sense I need to know the dependencies for each application, but I'm not sure how to do this in an automated way.
Any help here would be great. I'm new to this whole AWS, Docker, and python deployment thing... We're really a bunch of engineers and scientists who need to scale up our software. We have something that works, it just seems like there has to be a better way .
|
First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, and ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists, big things you may not need like Java, etc.
The base Anaconda/Ubuntu image is ~ 3.5GB in size, so it's not crazy that with a lot of extra installations of heavy third-party packages, you could get up to 10GB. In production image processing applications, I routinely worked with Docker images in the range of 3GB to 6GB, and those sizes were after we had heavily optimized the container.
To your question about splitting dependencies, you should provide each different application with its own package definition, basically a setup.py script and some other details, including dependencies listed in some mix of requirements.txt for pip and/or environment.yaml for conda.
If you have Project A in some folder / repo and Project B in another, you want people to easily be able to do something like pip install <GitHub URL to a version tag of Project A> or conda env create -f ProjectB_environment.yml or something, and voila, that application is installed.
Then when you deploy a specific application, have some CI tool like Jenkins build the container for that application using a FROM line to start from your thin Alpine / whatever container, and only perform conda install or pip install for the dependency file for that project, and not all the others.
This also has the benefit that multiple different projects can declare different version dependencies even among the same set of libraries. Maybe Project A is ready to upgrade to the latest and greatest pandas version, but Project B needs some refactoring before the team wants to test that upgrade. This way, when CI builds the container for Project B, it will have a Python dependency file with one set of versions, while in Project A's folder or repo of source code, it might have something different.
| 1.2 | true | 1 | 5,753 |
2018-10-09 15:27:34.223
|
Text classification by pattern
|
Could you recommend the best way to do this: I have a list of phrases, for example ["free flower delivery","flower delivery Moscow","color + home delivery","flower delivery + delivery","order flowers + with delivery","color delivery"], and a pattern, "flower delivery". I need to get a list with the phrases that are as close as possible to the pattern.
Could you give some advice on how to do it?
|
The answer given by nflacco is correct. In addition to that, have you tried edit distance? Try fuzzywuzzy (pip install fuzzywuzzy); it uses edit distance to give you a score for how close two sentences are. A sketch follows below.
| 0.201295 | false | 1 | 5,754 |
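A small sketch using fuzzywuzzy as suggested above, with the phrase list from the question; token_set_ratio is just one of several scorers you could pick.

```python
from fuzzywuzzy import fuzz, process

phrases = ["free flower delivery", "flower delivery Moscow", "color + home delivery",
           "flower delivery + delivery", "order flowers + with delivery", "color delivery"]
pattern = "flower delivery"

# Score every phrase against the pattern ...
scores = [(p, fuzz.token_set_ratio(pattern, p)) for p in phrases]
print(sorted(scores, key=lambda t: t[1], reverse=True))

# ... or let fuzzywuzzy rank them directly.
print(process.extract(pattern, phrases, limit=3))
```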
2018-10-10 10:39:12.207
|
TensorFlow: Correct way of using steps in Stochastic Gradient Descent
|
I am currently using TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. Below is the snippet of the my_input_function:
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
Here, it can be seen that the batch_size is 1. The notebook uses a housing data set containing 17000 labeled examples for training. This means for SGD, I will be having 17000 batches.
LRmodel = linear_regressor.train(input_fn = lambda:my_input_fn(my_feature,
targets), steps=100)
I have three questions -
Why is steps=100 in linear_regressor.train method above? Since we have 17000 batches and steps in ML means the count for evaluating one batch, in linear_regressor.train method steps = 17000 should be initialized, right?
Is number of batches equal to the number of steps/iterations in ML?
With my 17000 examples, if I keep my batch_size=100, steps=500, and num_epochs=5, what does this initialization mean and how does it correlate to 170 batches?
|
A step means one refresh of the parameters computed from one batch; so linear_regressor.train with steps=100 and batch_size=1 will refresh the parameters 100 times, each time using one example.
An epoch means one pass over the whole data set, which is 17,000 examples in your case.
| -0.386912 | false | 1 | 5,755 |
2018-10-11 15:24:17.267
|
Writing unit tests in Python
|
I have a task in which I have a csv file containing some sample data. The task is to convert the data inside the csv file into other formats like JSON, HTML, YAML etc. after applying some data validation rules.
Now I am also supposed to write some unit tests for this in pytest or the unittest module in Python.
My question is how do I actually write the unit tests for this, since I am converting the data to different JSON/HTML files? Should I prepare some sample files and then do a comparison with them in my unit tests?
I think only the data validation part of the task can be tested using unittest, and not the creation of files in different formats, right?
Any ideas would be immensely helpful.
Thanks in advance.
|
You should have functional tests, i.e. testing the whole pipeline from a csv file to the end result, but unit tests are about checking that the individual steps work.
So, for instance: can you read a csv file properly? Does it fail as expected when you don't provide a csv file? Are you able to check each validation unit? Do they fail when they should? Do they pass valid data?
And of course, the result must be tested as well. Starting from a known internal representation, is the resulting JSON valid? Does it contain all the required data? Same for YAML and HTML. You should not test the formatting, but rather what was output and whether it is correct.
You should always test that valid data passes and that incorrect data doesn't, at each step of your workflow. A sketch follows below.
| 1.2 | true | 1 | 5,756 |
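A minimal pytest sketch along these lines; the converter module, the csv_to_json signature, and the exception it raises are assumptions, not the asker's real code.

```python
import json

import pytest

from converter import csv_to_json   # hypothetical module and function under test


def test_csv_to_json_roundtrip(tmp_path):
    csv_file = tmp_path / "sample.csv"
    csv_file.write_text("name,age\nalice,30\n")
    # Assumes csv_to_json returns a JSON string for the given file path.
    result = json.loads(csv_to_json(csv_file))
    assert result == [{"name": "alice", "age": "30"}]


def test_missing_file_raises(tmp_path):
    # Assumes the converter raises FileNotFoundError for a missing input.
    with pytest.raises(FileNotFoundError):
        csv_to_json(tmp_path / "does_not_exist.csv")
```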
2018-10-12 12:28:16.983
|
How to get filtered rowCount in a QSortFilterProxyModel
|
I use a QSortFilterProxyModel to filter a QSqlTableModel's data, and want to get the filtered rowCount.
But when I call the QSortFilterProxyModel.rowCount method, the QSqlTableModel's rowCount was returned.
So how can I get the filtered rowcount?
|
After setting the filter on the QSortFilterProxyModel, call rowCount() on the proxy model itself rather than on the source model; a sketch follows below.
| 0 | false | 1 | 5,757 |
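A short PyQt5 sketch of the point above, assuming table_model is the already-populated QSqlTableModel and that column 1 is the one being filtered.

```python
from PyQt5.QtCore import QSortFilterProxyModel

# `table_model` is assumed to be your populated QSqlTableModel.
proxy = QSortFilterProxyModel()
proxy.setSourceModel(table_model)
proxy.setFilterKeyColumn(1)                 # column to filter on (assumed)
proxy.setFilterFixedString("some value")

print(proxy.rowCount())        # filtered row count
print(table_model.rowCount())  # unfiltered count from the source model
```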
2018-10-13 10:45:57.270
|
python 3.7 setting environment variable path
|
I installed Anaconda 3 and wanted to execute python from the shell. It returned that the command is either written wrong or does not exist. Apparently, I have to add a path to the environment variables.
Can someone tell me how to do this?
Environment: Windows 10, 64 bit and python 3.7
PS: I know the web is full of this, but I am notoriously afraid of making a mistake, and I did not find an exact entry for my environment. Thanks in advance.
Best, Daniel
|
Windows:
Search for "Edit the system environment variables".
In the Advanced tab, click Environment Variables.
In System variables, select Path and click Edit. Now click New and add your Python path.
Click Apply and close.
Now check in a command prompt.
| 1.2 | true | 1 | 5,758 |
2018-10-14 02:29:15.473
|
Given two lists of ints, how can we find the closes number in one list from the other one?
|
Given I have two different lists with ints.
a = [1, 4, 11, 20, 25] and b = [3, 10, 20]
I want to return a list of length len(b) that stores the closest number in a for each ints in b.
So, this should return [4, 11, 20].
I can do this in brute force, but what is a more efficient way to do this?
EDIT: It would be great if I can do this with standard library, if needed, only.
|
Use binary search, assuming the lists are in order.
If they are not, brute force also works, but note it is O(len(a)*len(b)), roughly O(n^2), not O(n) as I first wrote.
EDIT:
Even for unsorted input you can beat brute force: sorting a (with timsort) takes O(n log n), and each element of b then needs one O(log n) binary search, giving O((len(a)+len(b)) * log(len(a))) overall, which is asymptotically faster than the O(n^2) brute force. A sketch using the standard-library bisect module follows below.
| 0 | false | 1 | 5,759 |
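A standard-library sketch of the sort-plus-binary-search idea, using bisect; it reproduces the [4, 11, 20] result from the question.

```python
import bisect

def closest_values(a, b):
    a_sorted = sorted(a)
    result = []
    for x in b:
        i = bisect.bisect_left(a_sorted, x)
        # Candidates are the neighbours around the insertion point.
        candidates = a_sorted[max(i - 1, 0):i + 1]
        result.append(min(candidates, key=lambda v: abs(v - x)))
    return result

print(closest_values([1, 4, 11, 20, 25], [3, 10, 20]))   # [4, 11, 20]
```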
2018-10-14 18:17:29.503
|
Python tasks and DAGs with different conda environments
|
Say that most of my DAGs and tasks in AirFlow are supposed to run Python code on the same machine as the AirFlow server.
Can I have different DAGs use different conda environments? If so, how should I do it? For example, can I use the Python Operator for that? Or would that restrict me to using the same conda environment that I used to install AirFlow.
More generally, where/how should I ideally activate the desired conda environment for each DAG or task?
|
The Python that is running the Airflow Worker code, is the one whose environment will be used to execute the code.
What you can do is have separate named queues for separate execution environments for different workers, so that only a specific machine or group of machines will execute a certain DAG.
| 1.2 | true | 1 | 5,760 |
2018-10-14 18:54:30.970
|
Is it possible to make my own encryption when sending data through sockets?
|
For example in python if I’m sending data through sockets could I make my own encryption algorithm to encrypt that data? Would it be unbreakable since only I know how it works?
|
Yes you can. Would it be unbreakable? No. This is called security through obscurity. You're relying on the fact that nobody knows how it works. But can you really rely on that?
Someone is going to receive the data, and they'll have to decrypt it. The code must run on their machine for that to happen. If they have the code, they know how it works. Well, at least anyone with a lot of spare time and nothing else to do can easily reverse engineer it, and there goes your obscurity.
Is it feasible to make your own algorithm? Sure. A bit of XOR here, a bit of shuffling there... eventually you'll have an encryption algorithm (a toy sketch of exactly that follows below). It probably wouldn't be a good one, but it would do the job, at least until someone tries to break it, and then it probably wouldn't last a day.
Does Python care? Do sockets care? No. You can do whatever you want with the data. It's just bits after all, what they mean is up to you.
Are you a cryptographer? No, otherwise you wouldn't be here asking this. So should you do it? No.
| 1.2 | true | 1 | 5,761 |
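For illustration only, here is the kind of toy XOR scheme the answer alludes to; it "works" on both ends of a socket but offers no real security.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR each byte with a repeating key. NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"hello over the socket"
encrypted = xor_cipher(message, b"secret")
assert xor_cipher(encrypted, b"secret") == message   # XOR is its own inverse
```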
2018-10-14 19:10:42.147
|
imshow() with desired framerate with opencv
|
Is there any workaround for using cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
if I have an input video with 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait that time using cv2.waitKey().
But I still get some laggy output, which is slower than the source video.
|
I don't believe there is such a function in OpenCV, but maybe you could improve your method by computing a dynamic wait time with a timer such as timeit.default_timer():
calculate the time taken to process the frame and subtract that from the frame period, perhaps minus a few ms of buffer,
e.g. cv2.waitKey(int(1000/50 - (time processing finished - time read started) - 10)),
or you could use more rigid timing, e.g. script start time + frame# * 20 ms - time processing finished.
I haven't tried this personally, so I'm not sure whether it will actually work; it's also worth checking that the number isn't below 1. A sketch follows below.
| 1.2 | true | 1 | 5,762 |
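A rough sketch of the dynamic wait suggested above; frame_queue is assumed to be the queue filled by the capture thread, and the 50 FPS target comes from the question.

```python
import time
import cv2

TARGET_DT = 1.0 / 50            # source runs at 50 FPS

while True:
    start = time.perf_counter()
    frame = frame_queue.get()   # assumed: queue filled by the capture thread
    # ... postprocessing ...
    cv2.imshow("video", frame)
    elapsed = time.perf_counter() - start
    delay_ms = max(1, int((TARGET_DT - elapsed) * 1000))
    if cv2.waitKey(delay_ms) & 0xFF == ord("q"):
        break
```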
2018-10-16 21:43:21.673
|
Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU)
|
I am executing a Python script in Azure Machine Learning Studio. I am including other Python scripts and the Python library Theano. I can see that Theano gets loaded and I got the proper result after the script executed, but I saw this error message:
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
Did anyone know how to solve this problem? Thanks!
|
I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down, you can't really configure it (except for choosing from a small selection of Anaconda/Python versions).
You might be better off using the new Azure ML service, which allows you considerably more configuration options (including using GPUs and the like).
| 1.2 | true | 1 | 5,763 |
2018-10-17 14:07:06.357
|
how to use pip install a package if there are two same version of python on windows
|
I have two installations of the same Python version on Windows; both are 3.6.4. I installed one of them myself, and the other one comes with Anaconda.
My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two Python versions are the same.
|
Use virtualenv, conda environment or pipenv, it will help with managing packages for different projects.
| 0 | false | 2 | 5,764 |
2018-10-17 14:07:06.357
|
how to use pip install a package if there are two same version of python on windows
|
I have two installations of the same Python version on Windows; both are 3.6.4. I installed one of them myself, and the other one comes with Anaconda.
My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two Python versions are the same.
|
pip points to only one installation because pip is a script from one python.
If you have one Python in your PATH, then it's that python and that pip that will be used.
| 0.201295 | false | 2 | 5,764 |
2018-10-18 03:14:15.287
|
How can i make computer read a python file instead of py?
|
I have a problem installing numpy with Python 3.6, and I have Windows 10 64-bit.
Python 3.6.6
But when I type python in cmd, this appears:
Python is not recognized as an internal or external command
Typing py solves that problem, but then how can I install numpy?
I tried typing the command set path=c:/python36
and copy-pasting the actual path in cmd, but it doesn't work.
I also tried to edit the environment path by adding a ; and c:/python36 and restarting, but that doesn't help either.
I used pip install numpy and downloaded pip, but it doesn't work.
|
On Windows, the py command should be able to launch any Python version you have installed. Each Python installation has its own pip. To be sure you get the right one, use py -3.6 -m pip instead of just pip.
You can use where pip and where pip3 to see which Python's pip they mean. Windows just finds the first one on your path.
If you activate a virtualenv, then you you should get the right one for the virtualenv while the virtualenv is active.
| 0 | false | 2 | 5,765 |
2018-10-18 03:14:15.287
|
How can i make computer read a python file instead of py?
|
I have a problem installing numpy with Python 3.6, and I have Windows 10 64-bit.
Python 3.6.6
But when I type python in cmd, this appears:
Python is not recognized as an internal or external command
Typing py solves that problem, but then how can I install numpy?
I tried typing the command set path=c:/python36
and copy-pasting the actual path in cmd, but it doesn't work.
I also tried to edit the environment path by adding a ; and c:/python36 and restarting, but that doesn't help either.
I used pip install numpy and downloaded pip, but it doesn't work.
|
Try pip3 install numpy. To install python 3 packages you should use pip3
| 0 | false | 2 | 5,765 |
2018-10-18 09:53:46.373
|
Is it possible to manipulate data from csv without the need for producing a new csv file?
|
I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel?
PS using pandas
Thanks!
|
This is not possible using pandas. The library creates a copy of your .csv/.xls file and stores it in RAM, so all changes are applied to the copy in memory, not to the file on disk.
| 1.2 | true | 1 | 5,766 |
2018-10-19 09:04:38.457
|
how to remove zeros after decimal from string remove all zero after dot
|
I have a data frame with an object column, let's say col1, which has values like:
1.00,
1,
0.50,
1.54
I want to have the output like the below:
1,
1,
0.5,
1.54
Basically, remove zeros after the decimal point if there are no non-zero digits after them. Please note that I need an answer for a dataframe; pd.set_option and round don't work for me.
|
A quick-and-dirty solution is to use "%g" % value, which will convert 1.5 to 1.5 but 1.0 to 1, and so on. The negative side effect is that large numbers will be represented in scientific notation, like 4.44e+07. A dataframe sketch follows below.
| 0 | false | 1 | 5,767 |
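Applied to a DataFrame column, as the question asks, the %g trick could look like this (toy data taken from the question):

```python
import pandas as pd

df = pd.DataFrame({"col1": [1.00, 1, 0.50, 1.54]}, dtype="object")
df["col1"] = df["col1"].map(lambda v: "%g" % float(v))
print(df["col1"].tolist())   # ['1', '1', '0.5', '1.54']
```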
2018-10-19 10:34:41.947
|
Call Python functions from C# in Visual Studio Python support VS 2017
|
This is related to new features Visual Studio has introduced - Python support, Machine Learning projects to support.
I have installed support and found that I can create a python project and can run it. However, I could not find how to call a python function from another C# file.
Example, I created a classifier.py from given project samples, Now I want to run the classifier and get results from another C# class.
If there is no such portability, then how is it different from creating a C# Process class object and running the Python.exe with our py file as a parameter.
|
As per the comments, Python support has come to Visual Studio, which supports running and debugging Python scripts.
However, calling a Python function from a C# function, and vice versa, is not supported yet.
Closing the thread. Thanks for the suggestions.
| 1.2 | true | 1 | 5,768 |
2018-10-19 10:55:44.210
|
Running Jenkinsfile with multiple Python versions
|
I have a multibranch pipeline set up in Jenkins that runs a Jenkinsfile, which uses pytest for testing scripts, and outputs the results using Cobertura plug-in and checks code quality with Pylint and Warnings plug-in.
I would like to test the code with Python 2 and Python 3 using virtualenv, but I do not know how to perform this in the Jenkinsfile, and Shining Panda plug-in will not work for multibranch pipelines (as far as I know). Any help would be appreciated.
|
You can do it even with vanilla Jenkins (without any plugins). The 'biggest' problem will be proper parametrization. But let's start from the beginning.
2 versions of Python
When you install 2 versions of Python on a single machine you will have 2 different executables: for Python 2 you will have python and for Python 3 you will have python3. Even when you create a virtualenv (use venv) you will have both of them. So you are able to run unit tests against both versions of Python; it's just a matter of executing the proper command from a batch/bash script.
Jenkins
There are many ways of doing it:
you can prepare separate jobs for the Python 2 and Python 3 tests and run them from the Jenkinsfile
you can define the whole pipeline in a single Jenkinsfile where each Python test is a different stage (they can run one after another or concurrently)
| 0.386912 | false | 1 | 5,769 |
2018-10-20 01:58:46.053
|
How to find redundant paths (subpaths) in the trajectory of a moving object?
|
I need to track a moving deformable object in a video (but only 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases the object traced either p0-...-p10 or a sub-path like p3-p4-p5. Here, p0,p1,...,p10 represent object positions (in (x,y) pixel coordinates at the respective instants). Also, how do I know at which frame(s) these paths (subpaths) are being revisited?
|
I would first create a detection procedure that outputs a list of points visited, along with their video frame numbers. Then use list-processing functions to find how many repeated subsequences there are and where they occur.
As you see, I haven't written your code. If you need any more advice, please ask!
| 0 | false | 1 | 5,770 |
2018-10-20 13:20:10.283
|
Python - How to run script continuously to look for files in Windows directory
|
I got a requirement to parse the message files in .txt format real time as and when they arrive in incoming windows directory. The directory is in my local Windows Virtual Machine something like D:/MessageFiles/
I wrote a Python script to parse the message files because it's a fixed width file and it parses all the files in the directory and generates the output. Once the files are successfully parsed, it will be moved to archive directory. Now, i would like to make this script run continuously so that it looks for the incoming message files in the directory D:/MessageFiles/ and perform the processing as and when it sees the new files in the path.
Can someone please let me know how to do this?
|
There are a few ways to do this, it depends on how fast you need it to archive the files.
If the frequency is low, for example every hour, you can try to use windows task scheduler to run the python script.
If we are talking high frequency, or you really want a python script running 24/7, you could put it in a while loop and at the end of the loop do time.sleep()
If you go with this, I would recommend not blindly parsing the entire directory on every run, but instead finding a way to check whether new files have been added to the directory (such as the amount of files perhaps, or the total size). And then if there is a fluctuation you can archive.
| 1.2 | true | 1 | 5,771 |
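A bare-bones polling sketch of the while/sleep approach described above; the source and archive paths are assumptions based on the question, and parse_file stands in for the existing fixed-width parser.

```python
import os
import shutil
import time

SRC = r"D:/MessageFiles"                 # incoming directory (from the question)
ARCHIVE = r"D:/MessageFiles/archive"     # assumed archive location

def parse_file(path):
    """Placeholder for the existing fixed-width parser."""

while True:
    for name in os.listdir(SRC):
        path = os.path.join(SRC, name)
        if os.path.isfile(path) and name.lower().endswith(".txt"):
            parse_file(path)
            shutil.move(path, os.path.join(ARCHIVE, name))
    time.sleep(10)   # poll every 10 seconds
```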
2018-10-20 15:04:32.733
|
PyOpenGL camera system
|
I'm confused on how the PyOpenGL camera works or how to implement it. Am I meant to rotate and move the whole world around the camera or is there a different way?
I couldn't find anything that can help me and I don't know how to translate C to python.
I just need a way to transform the camera that can help me understand how it works.
|
To say it bluntly: There is no such thing as a "camera" in OpenGL (neither there is in DirectX, or Vulkan, or in any of the legacy 3D graphics APIs). The effects of a camera is understood as some parameter that contributes to the ultimate placement of geometry inside the viewport volume.
The sooner you understand that all that current GPUs do is offering massively accelerated computational resources to set the values of pixels in a 2D grid, where the region of the pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better.
You're not even moving the world around the camera. Setting up transformations is actually erecting the stage in which "the world" will appear in the first place. Any notion of a "camera" is an abstraction created by a higher-level framework, like a third-party 3D engine or your own creation.
So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:
What kind of transformations do I have to chain up, to give a tuple of numbers that are called "position" an actual meaning, by letting this position turn up at a certain place on the visible screen?
You really ought to think that way, because that is what's actually happening.
| 1.2 | true | 1 | 5,772 |
2018-10-21 13:11:30.197
|
Anaconda Installation on Azure Web App Services
|
I install my python modules via pip for my Azure Web Apps. But some of python libraries that I need are only available in conda. I have been trying to install anaconda on Azure Web Apps (windows/linux), no success so far. Any suggestions/examples on how to use conda env on azure web apps?
|
Currently, Azure App Service only supports the official Python to be installed as extensions. Instead of using the normal App Service, I would suggest you to use a Webapp for Container so that you can deploy your web app as a docker container. I suppose this is the only solution until Microsoft supports Anaconda on App Service.
| 0.386912 | false | 1 | 5,773 |
2018-10-21 15:08:58.620
|
Why tokenize/preprocess words for language analysis?
|
I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis.
Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?
|
Perhaps I'm being overly correct, but doesn't tokenization simply refer to splitting up the input stream (of characters, in this case) based on delimiters to receive whatever is regarded as a "token"?
Your tokens can be arbitrary: you can perform analysis on the word level where your tokens are words and the delimiter is any space or punctuation character. It's just as likely that you analyse n-grams, where your tokens correspond to a group of words and delimiting is done e.g. by sliding a window.
So in short, in order to analyse words in a stream of text, you need to tokenize to receive "raw" words to operate on.
Tokenization however is often followed by stemming and lemmatization to reduce noise. This becomes quite clear when thinking about sentiment analysis: if you see the tokens happy, happily and happiness, do you want to treat them each separately, or wouldn't you rather combine them to three instances of happy to better convey a stronger notion of "being happy"?
| 1.2 | true | 2 | 5,774 |
2018-10-21 15:08:58.620
|
Why tokenize/preprocess words for language analysis?
|
I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis.
Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?
|
Tokenization is an easy way of understanding the lexicon/vocabulary in text processing.
A basic first step in analyzing language or patterns in text is to remove symbols/punctuations and stop words. With tokenization you are able to split the large chunks of text to identify and remove text which might not add value, in many cases, stop words like 'the','a','and', etc do not add much value in identifying words of interest.
Word frequencies are also very common in understanding the usage of words in text, Google's Ngram allows for language analysis and plots out the popularity/frequency of a word over the years. If you do not tokenize or split the strings, you will not have a basis to count the words that appear in a text.
Tokenization also allows you to run more advanced analyses, for example tagging the parts of speech or assigning sentiment to certain words. Also, for machine learning, texts are mostly preprocessed to convert them into arrays which are fed to the different layers of neural networks. Without tokenizing, the inputs would all be too distinct to run any analysis on.
| 0 | false | 2 | 5,774 |
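To make the first answer concrete, a very small tokenize-then-count sketch with the standard library; the regex tokenizer and the sample tweets are illustrative assumptions.

```python
import re
from collections import Counter

tweets = ["Free flower delivery!!", "Flower delivery in Moscow, free delivery"]

tokens = []
for tweet in tweets:
    # A very simple tokenizer: lower-case, keep word characters and hashtags.
    tokens.extend(re.findall(r"[#\w']+", tweet.lower()))

print(Counter(tokens).most_common(3))   # e.g. [('delivery', 3), ('free', 2), ('flower', 2)]
```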
2018-10-23 13:07:01.447
|
Shutdown (a script) one raspberry pi with another raspberry pi
|
I am currently working on a school project. We need to be able to shutdown (and maybe restart) a pythonscript that is running on another raspberry pi using a button.
I thought that the easiest thing, might just be to shutdown the pi from the other pi. But I have no experience on this subject.
I don't need an exact guide (I appreciate all the help I can get) but does anyone know how one might do this?
|
Well, first we should ask whether the Pi you are trying to shut down is connected to a network (LAN or the internet, it doesn't matter).
If the answer is yes, you can simply connect to your Pi through SSH and call shutdown; a sketch follows below.
I don't know why you want another Pi: you can do it from any device connected to the same network as your first Pi (Wi-Fi or Ethernet if LAN, or from anywhere if it's open to the internet).
You could make a smartphone app, or any kind of code that can connect over SSH.
| 0 | false | 1 | 5,775 |
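A possible SSH sketch using paramiko (an assumption; any SSH client works); the IP address and credentials are placeholders, and the command shuts the whole Pi down rather than just stopping the script.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# IP address and credentials are placeholders for your own Pi.
client.connect("192.168.1.42", username="pi", password="raspberry")
client.exec_command("sudo shutdown -h now")   # shuts the whole Pi down
client.close()
```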
2018-10-23 15:25:12.317
|
python+docker: docker volume mounting with bad perms, data silently missing
|
I'm running into an issue with volume mounting, combined with the creation of directories in Python.
Essentially inside my container, I'm writing to some path /opt/…, and I may have to make the path (which I'm using os.makedirs for)
If I mount a host file path like -v /opt:/opt, with bad "permissions" where the docker container does not seem to be able to write to, the creation of the path inside the container DOES NOT FAIL. The makedirs(P) works, because inside the container, it can make the dir just fine, because it has sudo permissions. However, nothing gets written, silently, on the host at /opt/…. The data just isn't there, but no exception is ever raised.
If I mount a path with proper/open permissions, like -v /tmp:/opt, then the data shows up on the host machine at /tmp/… as expected.
So, how do I avoid failing silently if there are no write permissions on the host side (the left side of the -v argument)?
EDIT: my question is "how do I detect this bad deployment scenario, crash, and fail fast inside the container, if the person who deploys the container, does it wrong"? Just silently not writing data isn't acceptable.
|
The bad mount is owned by root on the host, right, and the good mount is owned by a user in the docker group on the host? Can you check the user/group of the mounted /opt? It should be different from that of /tmp.
| 0 | false | 1 | 5,776 |
2018-10-24 06:17:42.420
|
Building comprehensive scraping program/database for real estate websites
|
I have a project I’m exploring where I want to scrape the real estate broker websites in my country (30-40 websites of listings) and keep the information about each property in a database.
I have experimented a bit with scraping in python using both BeautifulSoup and Scrapy.
What I would Ideally like to achieve is a daily updated database that will find new properties and remove properties when they are sold.
Any pointers as to how to achieve this?
I am relatively new to programming and open to learning different languages and resources if python isn’t suitable.
Sorry if this forum isn’t intended for this kind of vague question :-)
|
Build a scraper and schedule a daily run. You can use scrapy and the daily run will update the database daily.
| 0 | false | 1 | 5,777 |
2018-10-24 09:41:09.793
|
Using convolution layer trained weights for different image size
|
I want to use the first three convolution layers of VGG-16 to generate feature maps.
But I want to use it with variable image sizes, i.e. not the ImageNet size of 224x224 or 256x256, but something like 480x640 or any other random image dimension.
As convolution layers are independent of the image's spatial size, how can I use the weights for varying image sizes?
So how do we use the pre-trained weights of VGG-16 up to the first three convolution layers?
Kindly let me know if that is possible.
|
As convolution layers are independent of image size
Actually it's more complicated than that. The kernel itself is independent of the image size because we apply it to each pixel, and indeed the training of these kernels can be reused.
But this means that the output size depends on the image size, because it determines how many values are fed out of the layer. So the dense layers are not adapted to your image, even if the feature extractors are independent of it.
So you need to preprocess your image to fit the size expected by the first dense layer, or retrain your dense layers from scratch (a sketch of reusing only the convolutional part follows below).
What people call "transfer learning" is what has been done in segmentation for decades: you reuse the best feature extractors and then train a dedicated model on top of these features.
| 1.2 | true | 1 | 5,778 |
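A hedged Keras sketch of reusing only the early VGG-16 convolution blocks with arbitrary input sizes; the cut-off at 'block2_conv1' as the third conv layer follows Keras's VGG16 layer naming.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# include_top=False drops the dense head, so inputs of any (H, W, 3) size work.
base = VGG16(weights="imagenet", include_top=False)

# Keep everything up to the third conv layer (Keras names: block1_conv1,
# block1_conv2, block2_conv1).
features = Model(inputs=base.input, outputs=base.get_layer("block2_conv1").output)

# feature_maps = features.predict(batch_of_480x640_images)
```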
2018-10-24 18:05:05.703
|
Display complex numbers in UI when using wxPython
|
I know complex math and the necessary operations (either "native" Python, or through NumPy). My question has to do with how to display complex numbers in a UI using wxPython. All the questions I found dealing with Python and complex numbers have to do with manipulating complex data.
My original thought was to subclass wx.TextCtrl and override the set and get methods to apply and strip some formatting as needed, and concatenating an i (or j) to the imaginary part.
Am I going down the wrong path? I feel like displaying complex numbers is something that should already be done somewhere.
What would be the recommended pattern for this even when using another UI toolkit, as the problem is similar. Also read my comment below on why I would like to do this.
|
As Brian considered my first comment good advice, and he got no more answers, I am posting it as an answer. Please refer also to the other question comments discussing the issue.
In any UI you display strings and you read strings from the user. Why would you mix the type-to-string or string-to-type translation with the widget's functionality? Get the strings, convert and use them, or "print" your values to a string and show the string in the UI.
| 0 | false | 1 | 5,779 |
2018-10-24 21:37:53.237
|
Change file metadata using Apache Beam on a cloud database?
|
Can you change the file metadata on a cloud database using Apache Beam? From what I understand, Beam is used to set up dataflow pipelines for Google Dataflow. But is it possible to use Beam to change the metadata if you have the necessary changes in a CSV file without setting up and running an entire new pipeline? If it is possible, how do you do it?
|
You could code Cloud Dataflow to handle this but I would not. A simple GCE instance would be easier to develop and run the job. An even better choice might be UDF (see below).
There are some guidelines for when Cloud Dataflow is appropriate:
Your data is not tabular and you can not use SQL to do the analysis.
Large portions of the job are parallel -- in other words, you can process different subsets of the data on different machines.
Your logic involves custom functions, iterations, etc...
The distribution of the work varies across your data subsets.
Since your task involves modifying a database (I am assuming a SQL database), it would be much easier and faster to write a UDF to process and modify the database.
| 0 | false | 1 | 5,780 |
2018-10-25 02:44:34.287
|
How to use Tensorflow Keras API
|
Well, I started learning TensorFlow, but I notice there's so much confusion about how to use it.
First, some tutorials present models using the low-level API (tf.variables, scopes, etc.), but other tutorials use Keras instead and, for example, invoke callbacks to use TensorBoard.
Second, what's the purpose of having a ton of duplicate APIs? Really, what's the purpose behind using a high-level API like Keras when you have the low-level one to build models like Lego blocks?
Finally, what's the true purpose of using eager execution?
|
You can use these APIs all together. E.g. if you have a regular dense network but with one special layer, you can use the higher-level API for the dense layers (tf.layers and tf.keras) and the low-level API for your special layer. Furthermore, complex graphs are easier to define in the low-level APIs, e.g. if you want to share variables.
Eager execution helps you for fast debugging, it evaluates tensors directly without a need of invoking a session.
| 0 | false | 1 | 5,781 |
2018-10-25 11:08:14.153
|
Keras flow_from_dataframe wrong data ordering
|
I am using keras's data generator with flow_from_dataframe. for training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different than the ordering of the "id" column in my dataframe.
shuffle=False does make the ordering of the generator consistent, but it is a different ordering than the dataframe. I also tried different batch sizes and the corresponding correct steps for the predict_generator function. (for example: batch_Size=1, steps=len(data))
how can I make sure the labels predicted for my test set are ordered in the same way of my dataframe "id" column?
|
While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property.
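A minimal sketch of tying the predictions back to the dataframe through that property (model, test_df and the images/ directory are assumed to exist already; the column names are hypothetical and a single-output model is assumed):

import pandas as pd
from keras.preprocessing.image import ImageDataGenerator

test_gen = ImageDataGenerator(rescale=1. / 255).flow_from_dataframe(
    test_df, directory='images/', x_col='id', class_mode=None,
    shuffle=False, target_size=(224, 224), batch_size=32)

preds = model.predict_generator(test_gen, steps=len(test_gen))

# test_gen.filenames is in the exact order the predictions were produced,
# so merge on it instead of assuming the dataframe's own ordering.
results = pd.DataFrame({'id': test_gen.filenames, 'pred': preds.ravel()})
ordered = test_df.merge(results, on='id', how='left')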
| 1.2 | true | 1 | 5,782 |
2018-10-25 15:16:07.853
|
Write python functions to operate over arbitrary axes
|
I've been struggling with this problem in various guises for a long time, and never managed to find a good solution.
Basically if I want to write a function that performs an operation over a given, but arbitrary axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in general how to do this.
The issue always seems to come down to the inflexibility of the slicing syntax; if I want to access the ith slice on the 3rd index, I can use A[:,:,i], but I can't generalise this to the nth index.
|
numpy functions use several approaches to do this:
transpose axes to move the target axis to a known position, usually first or last; and if needed transpose the result
reshape (along with transpose) to reduce the problem to simpler dimensions. If your focus is on the n'th dimension, it might not matter whether the other dimensions are flattened or not. They are just 'going along for the ride'.
construct an indexing tuple. idx = (slice(None), slice(None), j); A[idx] is the equivalent of A[:,:,j]. Start with a list or array of the right size, fill it with slices, fiddle with it, and then convert it to a tuple (tuples are immutable). A short sketch of this approach follows the list.
Construct indices with indexing_tricks tools like np.r_, np.s_ etc.
Study code that provides for axes. Compiled ufuncs won't help, but functions like tensordot, take_along_axis, apply_along_axis, np.cross are written in Python, and use one or more of these tricks.
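A minimal sketch of the indexing-tuple approach, taking the i-th slice along an arbitrary axis of an arbitrary-rank array:

import numpy as np

def take_slice(A, i, axis):
    idx = [slice(None)] * A.ndim   # one ':' per axis
    idx[axis] = i                  # replace ':' with the index on the target axis
    return A[tuple(idx)]

A = np.arange(24).reshape(2, 3, 4)
print(np.array_equal(take_slice(A, 1, axis=2), A[:, :, 1]))   # True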
| 1.2 | true | 1 | 5,783 |
2018-10-25 15:26:46.793
|
Highly variable execution times in Cython functions
|
I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine.
The new cython functions profiled end-to-end with cProfile (if not necessary I won't deep down in cython profiling) record cumulative measurement times highly variable.
E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) is:
in a first round 215,627339 seconds
in a second round 235,336131 seconds
Each execution calls the functions many times with different, but fixed parameters.
Maybe this variability could depend on the CPU load of the test machine (a cloud-hosted dedicated one), but I wonder whether such variability (almost 10%) could somehow depend on Cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...).
Any idea on how to take reliable metrics?
|
First of all, you need to ensure that your measurement device is capable of measuring what you need: specifically, only the system resources you consume. UNIX's utime is one such command, although even that one still includes swap time. Check the documentation of your profiler: it should have capabilities to measure only the CPU time consumed by the function. If so, then your figures are due to something else.
Once you've controlled the external variations, you need to examine the internal. You've said nothing about the complexion of your function. Some (many?) functions have available short-cuts for data-driven trivialities, such as multiplication by 0 or 1. Some are dependent on an overt or covert iteration that varies with the data. You need to analyze the input data with respect to the algorithm.
One tool you can use is a line-oriented profiler to detail where the variations originate; seeing which lines take the extra time should help determine where the "noise" comes from.
| 0.201295 | false | 2 | 5,784 |
2018-10-25 15:26:46.793
|
Highly variable execution times in Cython functions
|
I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine.
The new cython functions profiled end-to-end with cProfile (if not necessary I won't deep down in cython profiling) record cumulative measurement times highly variable.
E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) is:
in a first round 215,627339 seconds
in a second round 235,336131 seconds
Each execution calls the functions many times with different, but fixed parameters.
Maybe this variability could depend on the CPU load of the test machine (a cloud-hosted dedicated one), but I wonder whether such variability (almost 10%) could somehow depend on Cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...).
Any idea on how to take reliable metrics?
|
I'm not a performance expert, but from my understanding the thing you should be measuring is the average time it takes per execution, not the cumulative time. Other than that, is your function doing anything like reading from disk and/or making network requests?
| 0 | false | 2 | 5,784 |
2018-10-25 20:43:10.730
|
Kernel size change in convolutional neural networks
|
I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.
Convolutional layer with kernel_size = (5,5) with 32 output channels
new dimension of throughput = (32, 28, 28)
Max Pooling layer with pool_size (2,2) and step (2,2)
new dimension of throughput = (32, 14, 14)
If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?
Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).
|
You need 64 kernels, each with size (32, 5, 5).
The depth (number of channels) of a kernel - 32 in this case, or 3 for an RGB image, 1 for grayscale, etc. - should always match the depth of its input.
E.g. if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth N, you can copy this 3x3 kernel N times along the third dimension. The math is then just like the single-channel case: at each window position you multiply the kernel values with the input values in all N channels, sum everything up, and get a single output value (one pixel). So the output of one kernel is a map with a single channel. How much depth do you want the input of the next layer to have? That is the number of kernels you should apply. Hence in your case the weights have size (64 x 32 x 5 x 5): 64 kernels, each spanning all 32 input channels with a 5x5 window.
| 0 | false | 1 | 5,785 |
2018-10-25 21:40:47.257
|
Python: I can not get pynput to install
|
I'm trying to run a program with pynput. I tried installing it through terminal on Mac with pip. However, it still says it's unresolved on my ide PyCharm. Does anyone have any idea of how to install this?
|
I have three theories, but first: make sure it is installed by running python -c "import pynput"
JetBrains' IDEs typically do not scan for package updates, so try restarting the IDE.
JetBrains' IDE might configure a Python environment for you; this might mean you have to manually add the package in your run configuration's interpreter.
You have two Python versions installed and you installed the package for a different version than the one you run the script with.
I think either 1 or 3 is the most likely.
| 0 | false | 1 | 5,786 |
2018-10-26 07:04:15.230
|
How to get the dimension of tensors at runtime?
|
I can get the dimensions of tensors at graph construction time by manually printing the shapes of tensors (tf.shape()), but how do I get the shape of these tensors at session runtime?
The reason that I want shape of tensors at runtime is because at graph construction time shape of some tensors is coming as (?,8) and I cannot deduce the first dimension then.
|
You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like this:
_showme_tensor = sess.run(showme_tensor)
and then you can just print the output as you print a list. If you have different tensors to print, you can just add them like that :
_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2])
| 0 | false | 1 | 5,787 |
2018-10-27 10:53:32.190
|
python - pandas dataframe to powerpoint chart backend
|
I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this?
P.S. The presentation is loaded using the module python-pptx
|
You will need to read a bit about python-pptx.
You need the chart's index and the slide index of the chart. Once you know them,
get your chart object like this ->
chart = presentation.slides[slide_index].shapes[shape_index].chart
replacing data
chart.replace_data(new_chart_data)
reset_chart_data_labels(chart)
then when you save your presentation it will have updated the data.
usually, I uniquely name all my slides and charts in a template and then I have a function that will get me the chart's index and slide's index. (basically, I iterate through all slides, all shapes, and find a match for my named chart).
Here is a screenshot where I name a chart -> [![screenshot][1]][1]. Naming slides is a bit more tricky and I will not delve into that, but all you need is the slide index: just count the slides 0-based and then you have the slide's index.
[1]: https://i.stack.imgur.com/aFQwb.png
| 0 | false | 1 | 5,788 |
2018-10-31 22:26:56.993
|
How to make Flask app up and running after server restart?
|
What is the recommended way to run a Flask app (e.g. via Gunicorn?) and how do I make it come back up automatically after a Linux server (Red Hat) restart?
Thanks
|
Have you looked at supervisord? It works reasonably well: it handles restarting processes automatically if they fail, and it looks after error logs nicely.
| 0 | false | 1 | 5,789 |
2018-11-01 03:08:27.057
|
cv2 show video stream & add overlay after another function finishes
|
I am currently working on a real-time face detection project.
What I have done is capture the frame using cv2, do the detection and then show the result using cv2.imshow(), which results in a low fps.
I want a high-fps video showing on the screen without lag and a low-fps detection bounding box overlay.
Is there a solution to show the real-time video stream (with the last detection result's bounding box), and once a new detection is finished, show the new bounding box, so that the video is not delayed by the detection function?
Any help is appreciated!
Thanks!
|
A common approach would be to create a flag that allows the detection algorithm to run only once every couple of frames and save the predicted regions of interest to a list, whilst drawing bounding boxes for every frame.
So for example with a face detection algorithm: process every 15th frame to detect faces, but in every frame draw the bounding boxes from the latest predictions, even though the predictions only get updated every 15 frames.
Another approach could be to add an object tracking layer. Run your heavy algorithm to find the ROIs and then use an object tracking library to hold on to them until the next time the detection algorithm runs.
Hope this made sense.
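A minimal sketch of the first approach (OpenCV assumed; detect_faces() is a hypothetical stand-in for your slow detector returning (x, y, w, h) boxes):

import cv2

cap = cv2.VideoCapture(0)
boxes, frame_count = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_count % 15 == 0:
        boxes = detect_faces(frame)        # slow step, run only every 15th frame
    for (x, y, w, h) in boxes:             # cheap step, run on every frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('stream', frame)
    frame_count += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()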
| 1.2 | true | 1 | 5,790 |
2018-11-01 07:22:44.353
|
What Is the Correct Mimetype (in and out) for a .Py File for Google Drive?
|
I have a script that uploads files to Google Drive. I want to upload python files. I can do it manually and have it keep the file as .py correctly (and it's previewable), but no matter what mimetypes I try, I can't get my program to upload it correctly. It can upload the file as a .txt or as something GDrive can't recognize, but not as a .py file. I can't find an explicit mimetype for it (I found a reference for text/x-script.python but it doesn't work as an out mimetype).
Does anyone know how to correctly upload a .py file to Google Drive using REST?
|
Also this is a valid Python mimetype: text/x-script.python
| -0.201295 | false | 1 | 5,791 |
2018-11-01 09:31:15.857
|
Running a python file in windows after removing old python files
|
So I am running Python 3.6.5 on a school computer. Most things are heavily restricted on a school computer and I can only use Python on drive D. I cannot use batch either. I had Python 2.7 on it last year until I deleted all the files and installed Python 3.6.5. After that I couldn't double-click a .py file to open it, as it asked to continue using E:\Python27\python(2.7).exe. I had the old Python on a USB, which is why it asks this, but now I would like to change that path to the new Python executable. How would I do that in Windows?
|
Just open your Python IDE and open the file manually.
| 0 | false | 1 | 5,792 |
2018-11-01 22:25:51.750
|
GROUPBY with showing all the columns
|
I want to do a groupby of my MODELS by CITYS, keeping all the columns, so I can print the percentage of each MODEL in that CITY.
I put my dataframe in the photo below.
And I have written this code but I don't know how to finish it:
for name,group in d_copy.groupby(['CITYS'])['MODELS']:
|
Did you try this: d_copy.groupby(['CITYS','MODELS']).mean() to get the average percentage of a model by city?
Then if you want to catch the percentages you have to convert it to a DataFrame and select the column: pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE']
| 0 | false | 1 | 5,793 |
2018-11-03 05:34:23.617
|
Google Data Studio Connector and App Scripts
|
I am working on a project for a client in which I need to load a lot of data into data studio. I am having trouble getting the deployment to work with my REST API.
The API has been tested with code locally but I need to know how to make it compatible with the code base in App Scripts. Has anyone else had experience with working around this? The endpoint is a Python Flask application.
Also, is there a limit on the amount of data that you can dump in a single response to the Data Studio? As a solution to my needs(needing to be able to load data for 300+ accounts) I have created a program that caches the data needed from each account and returns the whole payload at once. There are a lot of entries, so I was wondering if they had a limit to what can be uploaded at once.
Thank you in advance
|
I found the issue, it was a simple case of forgetting to add the url to the whitelist.
| 0.386912 | false | 1 | 5,794 |
2018-11-03 15:56:12.343
|
Multi-Line Combobox in Tkinter
|
Is it possible to have a multi-line text entry field with drop down options?
I currently have a GUI with a multi-line Text widget where the user writes some comments, but I would like to have some pre-set options for these comments that the user can hit a drop-down button to select from.
As far as I can tell, the Combobox widget does not allow changing the height of the text-entry field, so it is effectively limited to one line (expanding the width arbitrarily is not an option). Therefore, what I think I need to do is sub-class the Text widget and somehow add functionality for a drop down to show these (potentially truncated) pre-set options.
I foresee a number of challenges with this route, and wanted to make sure I'm not missing anything obvious with the existing built-in widgets that could do what I need.
|
I don't think you are missing anything. Note that ttk.Combobox is a composite widget. It subclasses ttk.Entry and has ttk.Listbox attached.
To make a multiline equivalent, subclass Text as you suggested. Perhaps call it ComboText. Attach either a frame with multiple read-only Texts, or a Text with multiple entries, each with a separate tag. Pick a method to open the combotext and methods to close it, with or without copying a selection into the main text. Write up an initial doc describing how to operate the thing.
| 0.201295 | false | 1 | 5,795 |
2018-11-04 15:50:14.623
|
Apache - if file does not exist, run script to create it, then serve it
|
How can I get this to happen in Apache (with python, on Debian if it matters)?
User submits a form
Based on the form entries I calculate which html file to serve them (say 0101.html)
If 0101.html exists, redirect them directly to 0101.html
Otherwise, run a script to create 0101.html, then redirect them to it.
Thanks!
Edit: I see there was a vote to close as too broad (though no comment or suggestion). I am just looking for a minimum working example of the Apache configuration files I would need. If you want the concrete way I think it will be done, I think apache just needs to check if 0101.html exists, if so serve it, otherwise run cgi/myprogram.py with input argument 0101.html. Hope this helps. If not, please suggest how I can make it more specific. Thank you.
|
Apache shouldn't care. Just serve a program that looks for the file: if it finds it, it reads it and returns the result; if it doesn't, it creates the file and then returns the result. All of this can be done with a simple Python file.
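A minimal CGI sketch of that idea (the cache directory and the build_page() generator are hypothetical placeholders for your own logic):

#!/usr/bin/env python
import cgi
import os

CACHE_DIR = '/var/www/cache'              # assumed location

form = cgi.FieldStorage()
page = form.getvalue('page', '0101') + '.html'
path = os.path.join(CACHE_DIR, page)

if not os.path.exists(path):
    html = build_page(page)               # your generation logic
    with open(path, 'w') as f:
        f.write(html)

print('Content-Type: text/html\n')        # header plus the required blank line
with open(path) as f:
    print(f.read())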
| 1.2 | true | 1 | 5,796 |
2018-11-04 18:53:52.133
|
AWS CLI upload failed: unknown encoding: idna
|
I am trying to push some files up to s3 with the AWS CLI and I am running into an error:
upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna
I believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it being used by powershell and cmd.
$> python --version
Python 3.6.7
If this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.
|
I had the same problem in Windows.
After investigating the problem, I realized that the problem is in the aws-cli installed using the MSI installer (x64). After removing "AWS Command Line Interface" from the list of installed programs and installing aws-cli using pip, the problem was solved.
I also tried installing the x32 MSI installer and the problem did not occur with it either.
| 1.2 | true | 2 | 5,797 |
2018-11-04 18:53:52.133
|
AWS CLI upload failed: unknown encoding: idna
|
I am trying to push some files up to s3 with the AWS CLI and I am running into an error:
upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna
I believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it being used by powershell and cmd.
$> python --version
Python 3.6.7
If this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.
|
Even I was facing same issue. I was running it on Windows server 2008 R2. I was trying to upload around 500 files to s3 using below command.
aws s3 cp sourcedir s3bucket --recursive --acl
bucket-owner-full-control --profile profilename
It works well and uploads almost all files, but for the first 2 or 3 files it used to fail with the error: An HTTP Client raised and unhandled exception: unknown encoding: idna
This error was not consistent. A file whose upload failed might succeed if I ran the command again. It was quite weird.
I tried things on a trial-and-error basis and it started working well.
Solution:
Uninstalled Python 3 and AWS CLI.
Installed Python 2.7.15
Added python installed path in environment variable PATH. Also added pythoninstalledpath\scripts to PATH variable.
AWS CLI doesn't work well with the MSI installer on Windows Server 2008, so I used pip instead.
Command:
pip install awscli
Note: for pip to work, do not forget to add pythoninstalledpath\scripts to PATH variable.
You should have following version:
Command:
aws --version
Output: aws-cli/1.16.72 Python/2.7.15 Windows/2008ServerR2 botocore/1.12.62
Voila! The error is gone!
| -0.16183 | false | 2 | 5,797 |
2018-11-05 10:20:35.477
|
Calling a Python function from HTML
|
I'm writing a web application where I'm trying to display the connected USB devices. I found a Python function that does exactly what I want, but I can't really figure out how to call the function from my HTML code, preferably on the click of a button.
|
Simple answer: you can't. The code would have to be run client-side, and no browser would execute potentially malicious code automatically (and not every system has a Python interpreter installed).
The only thing you can execute client-side (without the user taking action, e.g. downloading a program or browser add-on) is JavaScript.
| 1.2 | true | 1 | 5,798 |
2018-11-05 18:11:03.353
|
How to create Graphql server for microservices?
|
We have several microservices in Golang and Python. In Golang we write the finance operations and in Python the online store logic. We want to create one API for our front-end and we don't know how to do it.
I have read about API gateways: would it be right if Golang created its own GraphQL server, Python created another one, and they both communicated with a third GraphQL server which would generate the API for our front-end?
|
I do not know many details about your services, but a great pattern I have successfully used on different projects is, as you mentioned, a GraphQL gateway.
You will create one service - I prefer to create it in Node.js - through which all requests from the frontend will come. Then from the GraphQL gateway you will request your microservices. This will basically be your only entry point into the backend system. Requests will be authenticated and you are able to unify access to your data and also perform some performance optimizations, like implementing data loader's caching and batching to mitigate the N+1 problem. In addition you will reduce the complexity of having multiple APIs and leverage all the GraphQL benefits.
On my last project we had 7 different frontends and each was using the same GraphQL gateway, and I was really happy with our approach. There are definitely some downsides, as you need to keep all your frontends and the GraphQL gateway in sync, therefore you need to be more aware of your breaking changes, but it is solvable with, for example, the deprecated directive and by performing blue/green deployments with a Kubernetes cluster.
The other option is to create the so-called backend for frontend in GraphQL. Right now I do not have enough information about which solution would be best for you. You need to decide based on your frontend needs and business domain, but usually I prefer a GraphQL gateway, as GraphQL has great flexibility and the need to tailor your API to the frontend is covered by GraphQL's capabilities. Hope it helps, David
| 1.2 | true | 1 | 5,799 |
2018-11-05 18:14:16.803
|
What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?
|
I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.
Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.
I want to input this small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).
It seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?
I am using python and Keras for the above.
|
What you want is a 2D CNN, not a 3D one. A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image.
| 0.201295 | false | 2 | 5,800 |
2018-11-05 18:14:16.803
|
What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?
|
I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.
Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.
I want to input this small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).
It seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?
I am using python and Keras for the above.
|
If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel. If you don't want to convolve along the dimension of your channels, you should use a 2D CNN.
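A minimal sketch of adding that singleton channel axis so the spectral bands become the depth of a 3D convolution:

import numpy as np

X = np.zeros((1, 145, 145, 200))      # (batch, height, width, bands)
X5d = np.expand_dims(X, axis=-1)      # (batch, height, width, bands, 1)
print(X5d.shape)                      # (1, 145, 145, 200, 1)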
| 1.2 | true | 2 | 5,800 |
2018-11-06 05:41:57.087
|
Family tree in Python
|
I need to model a four generational family tree starting with a couple. After that if I input a name of a person and a relation like 'brother' or 'sister' or 'parent' my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of python and self taught in DSA. I think I should model the data as a dictionary and code for a tree DS with two root nodes(i.e, the first couple). But I am not sure how to start. I just need to know how to start modelling the family tree and the direction of how to proceed to code. Thank you in advance!
|
There's plenty of ways to skin a cat, but I'd suggest to create:
A Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children).
A dictionary mapping names to Person elements.
That should allow you to answer all of the necessary questions, and it's flexible enough to handle all kinds of family trees (including non-tree-shaped ones).
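A minimal sketch of that structure (names and relations are toy data; only sibling lookup is shown, parent/child queries fall out of the same links):

class Person:
    def __init__(self, name, gender, parents=None):
        self.name = name
        self.gender = gender            # 'M' or 'F'
        self.parents = parents or []    # list of Person
        self.spouse = None
        self.children = []

people = {}                             # name -> Person

def add_person(name, gender, parent_names=()):
    parents = [people[p] for p in parent_names]
    person = Person(name, gender, parents)
    for p in parents:
        p.children.append(person)
    people[name] = person
    return person

def siblings(name, gender=None):
    me = people[name]
    sibs = {c for p in me.parents for c in p.children if c is not me}
    return [s.name for s in sibs if gender is None or s.gender == gender]

add_person('Adam', 'M'); add_person('Eve', 'F')
add_person('Cain', 'M', ('Adam', 'Eve')); add_person('Abel', 'M', ('Adam', 'Eve'))
print(siblings('Cain', gender='M'))     # ['Abel']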
| 0.999909 | false | 1 | 5,801 |
2018-11-06 07:03:33.130
|
Tensorflow MixtureSameFamily and gaussian mixture model
|
I am really new to Tensorflow as well as gaussian mixture model.
I have recently used tensorflow.contrib.distribution.MixtureSameFamily class for predicting probability density function which is derived from gaussian mixture of 4 components.
When I plotted the predicted density function using "prob()" function as Tensorflow tutorial explains, I found the plotted pdf with only one mode. I expected to see 4 modes as the mixture components are 4.
I would like to ask whether Tensorflow uses any global mode predicting algorithm in their MixtureSameFamily class. If not, I would also like to know how MixtureSameFamily class forms the pdf with statistical values.
Thank you very much.
|
I found an answer to the above question thanks to my colleague.
The 4 components of the Gaussian mixture had very similar means, so the mixture looks like it has only one mode.
If I put four explicitly different values as means to the MixtureSameFamily class, I could get a plot of gaussian mixture with 4 different modes.
Thank you very much for reading this.
| 0 | false | 1 | 5,802 |
2018-11-07 04:43:09.720
|
How to run pylint plugin in Intellij IDEA?
|
I have installed pylint plugin and restarted the Intellij IDEA. It is NOT external tool (so please avoid providing answers on running as an external tool as I know how to).
However I have no 'pylint' in the tool menu or the code menu.
Is it invoked by running 'Analyze'? or is there a way to run the pylint plugin on py files?
|
This is for the latest IntelliJ IDEA version 2018.3.5 (Community Edition):
Type "Command ," or click "IntelliJ IDEA -> Preferences..."
From the list on the left of the popped up window select "Plugins"
Make sure that on the right top the first tab "Marketplace" is picked if it's not
Search for "Pylint" and when the item is found, click the greed button "Install" associated with the found item
The plugin should then be installed properly.
One can then turn on/off real-time Pylint scan via the same window by navigating in the list on the left: "Editor -> Inspections", then in the list on the right unfolding "Pylint" and finally checking/unchecking the corresponding checkbox on the right of the unfolded item.
One can also in the same window go the very last top-level item within the list on the left named "Other Settings" and unfold it.
Within it there's an item called "Pylint", click on it.
On the top right there should be a button "Test", click on it.
If in a few seconds to the left of the "Test" text there appears a green checkmark, then Pylint is installed correctly.
Finally, to access the actual Pylint window, click "View"->"Tool Windows"->"Pylint"!
Enjoy!
| 0.999909 | false | 1 | 5,803 |
2018-11-08 02:59:54.810
|
nltk bags of words showing emotions
|
I am working on NLP using Python and nltk.
I was wondering whether there is any dataset with bags of words containing keywords relating to emotions such as happy, joy, anger, sadness and so on.
From what I dug up in the nltk corpus, I see there are some sentiment analysis corpora which contain positive and negative reviews, which doesn't exactly relate to keywords showing emotions.
Is there any way I could build my own dictionary containing words which show emotion for this purpose? If so, how do I do it, and is there any collection of such words?
Any help would be greatly appreciated
|
I'm not aware of any dataset that associates sentiments to keywords, but you can easily build one starting from a generic sentiment analysis dataset.
1) Clean the dataset of stopwords and all the terms that you don't want to associate with a sentiment.
2) Compute the count of each word in the two sentiment classes and normalize it (see the sketch below). In this way you will associate with each word a probability of belonging to a class. Let's suppose the word "love" appears 300 times in the positive sentences and 150 times in the negative sentences. Normalizing, the word "love" belongs with a probability of 66% (300/(150+300)) to the positive class and 33% to the negative one.
3) In order to make the dictionary more robust to borderline terms you can set a threshold and consider neutral all the words whose max probability is lower than the threshold.
This is an easy approach to build the dictionary that you are looking for. You could use more sophisticated approaches such as Term Frequency-Inverse Document Frequency.
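A minimal sketch of step 2 with a toy corpus standing in for a real labelled dataset:

from collections import Counter

positive = ["i love this happy joyful day", "what a joy to see you"]
negative = ["i hate this sad awful day", "such anger and sadness"]
stopwords = {"i", "this", "a", "to", "you", "what", "and", "such"}

pos_counts = Counter(w for s in positive for w in s.split() if w not in stopwords)
neg_counts = Counter(w for s in negative for w in s.split() if w not in stopwords)

lexicon = {}
for word in set(pos_counts) | set(neg_counts):
    p, n = pos_counts[word], neg_counts[word]
    lexicon[word] = p / (p + n)          # probability the word is 'positive'

print(lexicon['love'], lexicon['hate'])  # 1.0 0.0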
| 0 | false | 1 | 5,804 |
2018-11-09 01:48:39.963
|
Operating the Celery Worker in the ECS Fargate
|
I am working on a project using AWS ECS. I want to use Celery as a distributed task queue. A Celery worker can be built as the EC2 type, but because of the large amount of time that the instance is in the idle state, I think it would be cost-effective for AWS Fargate to run the job and quit immediately.
Do you have suggestions on how to use the Celery Worker efficiently in the AWS cloud?
|
Fargate launch type is going to take longer to spin up than EC2 launch type, because AWS is doing all the "host things" for you when you start the task, including the notoriously slow attaching of an ENI, and likely downloading the image from a Docker repo. Right now there's no contest, EC2 launch type is faster every time.
So it really depends on the type of work you want the workers to do. You can expect a new Fargate task to take a few minutes to enter a RUNNING state for the aforementioned reasons. EC2 launch, on the other hand, because the ENI is already in place on your host and the image is already downloaded (at best) or mostly downloaded (likely worst), will move from PENDING to RUNNING very quickly.
Use EC2 launch type for steady workloads, use Fargate launch type for burst capacity
This is the current prevailing wisdom, often discussed as a cost factor because Fargate can't take advantage of the typical EC2 cost savings mechanisms like reserved instances and spot pricing. It's expensive to run Fargate all the time, compared to EC2.
To be clear, it's perfectly fine to run 100% in Fargate (we do), but you have to be willing to accept the downsides of doing that - slower scaling and cost.
Note you can run both launch types in the same cluster. Clusters are logical anyway, just a way to organize your resources.
Example cluster
This example shows a static EC2 launch type service running 4 celery tasks. The number of tasks, specs, instance size and all doesn't really matter, do it up however you like. The important thing is - EC2 launch type service doesn't need to scale; the Fargate launch type service is able to scale from nothing running (during periods where there's little or no work to do) to as many workers as you can handle, based on your scaling rules.
EC2 launch type Celery service
Running 1 EC2 launch type t3.medium (2vcpu/4GB).
Min tasks: 2, Desired: 4, Max tasks: 4
Running 4 celery tasks at 512/1024 in this EC2 launch type.
No scaling policies
Fargate launch type Celery service
Min tasks: 0, Desired: (x), Max tasks: 32
Running (x) celery tasks (same task def as EC2 launch type) at 512/1024
Add scaling policies to this service
| 1.2 | true | 1 | 5,805 |
2018-11-09 07:20:23.930
|
how do I insert some rows that I select from remote MySQL database to my local MySQL database
|
My remote MySQL database and local MySQL database have the same table structure, and both the remote and local MySQL databases use the utf-8 charset.
|
You'd better merge the values into the SQL template string and print the result first, to make sure the generated SQL is correct.
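A minimal sketch of the copy itself (this assumes mysql-connector-python, a hypothetical table my_table with columns a and b, and placeholder credentials), using parameterised SQL so the values are merged safely:

import mysql.connector

remote = mysql.connector.connect(host='remote.example.com', user='u',
                                 password='pw', database='db', charset='utf8')
local = mysql.connector.connect(host='localhost', user='u',
                                password='pw', database='db', charset='utf8')

rcur = remote.cursor()
rcur.execute("SELECT a, b FROM my_table WHERE a > %s", (100,))
rows = rcur.fetchall()

lcur = local.cursor()
lcur.executemany("INSERT INTO my_table (a, b) VALUES (%s, %s)", rows)
local.commit()

remote.close()
local.close()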
| 0 | false | 1 | 5,806 |
2018-11-09 16:42:21.617
|
Run external Python script that could only read/write only a subset of main app variables
|
I have a Python application that simulates the behaviour of a system, let's say a car.
The application defines a quite large set of variables, some corresponding to real world parameters (the remaining fuel volume, the car speed, etc.) and others related to the simulator internal mechanics which are of no interest to the user.
Everything works fine, but currently the user can have no interaction with the simulation whatsoever during its execution: she just sets simulation parameters, lauchs the simulation, and waits for its termination.
I'd like the user (i.e. not the creator of the application) to be able to write Python scripts, outside of the app, that could read/write the variables associated with the real world parameters (and only these variables).
For instance, at t=23s (this condition I know how to check for), I'd like to execute user script gasLeak.py, that reads the remaining fuel value and sets it to half its current value.
To sum up, how is it possible, from a Python main app, to execute user-written Python scripts that can access and modifiy only a pre-defined subset of the main script variables. In a perfect world, I'd also like that modifications applied to user scripts during the running of the app to be taken into account without having to restart said app (something along the reloading of a module).
|
Make the user-written scripts read command-line arguments and print to stdout. Then you can call them with the subprocess module with the variables they need to know about as arguments and read their responses with subprocess.check_output.
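A minimal sketch of that approach (the variable names and gasLeak.py are the hypothetical example from the question; the user script is expected to read sys.argv[1] as JSON and print the updated values as JSON):

import json
import subprocess

exposed = {'fuel_volume': 40.0, 'speed': 92.5}    # the whitelisted subset

out = subprocess.check_output(
    ['python', 'gasLeak.py', json.dumps(exposed)], timeout=5)
updates = json.loads(out)

# Only accept keys that were exposed in the first place.
exposed.update({k: v for k, v in updates.items() if k in exposed})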
| 0 | false | 1 | 5,807 |
2018-11-09 23:03:45.930
|
pytest-xdist generate random & uniqe ports for each test
|
I'm using pytest-xdist plugin to run some test using the @pytest.mark.parametrize to run the same test with different parameters.
As part of these tests, I need to open/close web servers and the ports are generated at collection time.
xdist does the test collection on the slaves and they are not synchronised, so how can I guarantee uniqueness for the port generation?
I can use the same port for each slave but I don't know how to achieve this.
|
I figured that I did not give enough information regarding my issue.
What I did was create one parameterized test using @pytest.mark.parametrize; before the test, I collect the list of parameters - the collection queries a web server and receives a list of "jobs" to process.
Each test contains information on a port that it needs to bind to, does some work and exits. Because the tests run in parallel, I need to make sure that the ports are different.
Eventually, I made sure that the job ids fall in the range 1024-65000 and used the job id as the port.
| 1.2 | true | 1 | 5,808 |
2018-11-10 23:45:59.803
|
how to detect if photo is mostly a document?
|
I think I am looking for something simpler than detecting document boundaries in a photo. I am only trying to flag photos which are mostly of documents rather than just normal scene photos. Is this an easier problem to solve?
|
Are the documents mostly white? If so, you could analyse the images for white content above a certain percentage. Generally text documents only have about 10% printed content on them in total.
| 0 | false | 1 | 5,809 |
2018-11-11 14:15:01.157
|
Sending data to Django backend from RaspberryPi Sensor (frequency, bulk-update, robustness)
|
I’m currently working on a Raspberry Pi/Django project slightly more complex that i’m used to. (i either do local raspberry pi projects, or simple Django websites; never the two combined!)
The idea is to have two Raspberry Pi’s collecting information, each running a local Python script that would take input from one HDMI feed (i’ve got all that part figured out - I THINK) using image processing. Now I want these two Raspberry Pi’s (that don’t talk to each other) to connect to a backend server that would combine, store (and process) the information gathered by my two Pis
I’m expecting each Pi to work on one frame per second, comparing it to the frame a second earlier (there are only a few different things it is looking out for), isolating any new event, and sending it to the server. I’m therefore expecting no more than a dozen binary timestamped data points per second.
Now what is the smart way to do it here ?
Do i make contact to the backend every second? Every 10 seconds?
How do i make these bulk HttpRequests ? Through a POST request? Through a simple text file that i send for the Django backend to process? (i have found some info about “bulk updates” for django but i’m not sure that covers it entirely)
How do i make it robust? How do i make sure that all data what successfully transmitted before deleting the log locally ? (if one call fails for a reason, or gets delayed, how do i make sure that the next one compensates for lost info?
Basically, i’m asking advise for making a IOT based project, where a sensor gathers bulk information and want to send it to a backend server for processing, and how should that archiving process be designed.
PS: i expect the image processing part (at one fps) to be fast enough on my Pi Zero (as it is VERY simple); backlog at that level shouldn’t be an issue.
PPS: i’m using a django backend (even if it seems a little overkill)
a/ because i already know the framework pretty well
b/ because i’m expecting to build real-time performance indicators from the combined data points gathered, using django, and displaying them in (almost) real-time on a webpage.
Thank you very much !
|
This partly depends on just how resilient you need it to be. If you really can't afford for a single update to be lost, I would consider using a message queue such as RabbitMQ - the clients would add things directly to the queue and the server would pop them off in turn, with no need to involve HTTP requests at all.
Otherwise it would be much simpler to just POST each frame's data in some serialized format (ie JSON) and Django would simply deserialize and iterate through the list, saving each entry to the db. This should be fast enough for the rate you describe - I'd expect saving a dozen db entries to take significantly less than half a second - but this still leaves the problem of what to do if things get hung up for some reason. Setting a super-short timeout on the server will help, as would keeping the data to be posted until you have confirmation that it has been saved - and creating unique IDs in the client to ensure that the request is idempotent.
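A minimal sketch of the client side (the endpoint, field names and status code are hypothetical; the point is the unique batch id and only clearing the local buffer after an acknowledged POST):

import time
import uuid

import requests

buffer = []   # filled elsewhere by the image-processing loop with small dicts

def flush(events):
    payload = {'batch_id': str(uuid.uuid4()), 'events': events}
    try:
        r = requests.post('https://example.com/api/events/',
                          json=payload, timeout=0.5)
        return r.status_code == 201
    except requests.RequestException:
        return False

while True:
    time.sleep(1)
    pending = buffer[:]                   # snapshot so new events are not lost
    if pending and flush(pending):
        del buffer[:len(pending)]         # drop only what was acknowledged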
| 0.673066 | false | 1 | 5,810 |
2018-11-12 08:56:25.160
|
run python from Microsoft Dynamics
|
I know i can access a Dynamics instance from a python script by using the oData API, but what about the other way around? Is it possible to somehow call a python script from within Dynamics and possible even pass arguments?
Would this require me to use custom js/c#/other code within Dynamics?
|
You won't be able to natively execute a Python script within Dynamics.
I would approach this by placing the Python script in a service that can be called via a web service call from Dynamics. You could make the call from form JavaScript or a Plugin using C#.
| 1.2 | true | 1 | 5,811 |
2018-11-12 20:04:05.643
|
Extracting URL from inside docx tables
|
I'm pretty much stuck right now.
I wrote a parser in python3 using the python-docx library to extract all tables found in an existing .docx and store it in a python datastructure.
So far so good. Works as it should.
Now I have the problem that there are hyperlinks in these tables which I definitely need! Due to the structure (the XML underneath), the docx library doesn't catch these; neither the URL nor the display text is provided. I found many people having similar concerns about this, but most didn't seem to have 'just that' dilemma.
I thought about unpacking the .docx and scanning the _ref document for the corresponding 'rid', then filling the actual data I have with the links found in the _ref xml.
Either way it seems seriously tedious to do it that way, so I was wondering if there is a more pythonic way to do it, or if somebody has good advice on how to tackle this problem?
|
You can extract the links by parsing the XML of the docx file.
You can walk over every element of the document by using document.element.getiterator().
Iterate over all the XML tags and extract their text and attributes. That way you will get the missing data which python-docx failed to extract.
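A minimal sketch of that idea applied to hyperlinks (python-docx with its lxml element tree is assumed; 'report.docx' is a placeholder filename). The w:hyperlink elements carry an r:id that resolves to the URL through the part's relationships, and the display text sits in the nested w:t runs:

from docx import Document

R_ID = '{http://schemas.openxmlformats.org/officeDocument/2006/relationships}id'

doc = Document('report.docx')
for el in doc.element.iter():
    if el.tag.endswith('}hyperlink'):
        rid = el.get(R_ID)
        url = doc.part.rels[rid].target_ref if rid else None
        text = ''.join(node.text or '' for node in el.iter()
                       if node.tag.endswith('}t'))
        print(text, '->', url)

Since tables live inside the document body's XML, the links inside them are picked up as well.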
| 0 | false | 1 | 5,812 |
2018-11-12 23:39:45.557
|
openpyxl how to read formula result after editing input data on the sheet? data_only=True gives me a "None" result
|
Using openpyxl, I'm able to read 2 numbers on a sheet, and also able to read their sum by loading the sheet with data_only=True.
However, when I alter the 2 numbers using openpyxl and then try to read the answer using data_only=True, it returns no output. How do I do this?
|
You can have either the value or the formula in openpyxl. It is precisely to avoid the confusion that this kind of edit could introduce that the library works like this. To evaluate the changed formulae you'll need to load the file in an app like MS Excel or LibreOffice that can evaluate the formulae and store the results.
| 0.135221 | false | 1 | 5,813 |
2018-11-13 01:35:08.450
|
inception v3 using tf.data?
|
I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff.
Unfortunately, the documentation on tf.data isn't doing much to relieve my concern that I've got too much data to fit in memory, especially given that I want to batch it in a reusable way, etc. I'm confident that the tf.data stuff can do this; I just don't know how to do it. Can anyone point me to a full example of code that uses tf.data to deal with batches of data that won't all fit in memory? Ideally, it would simply be an updated version of the inception-v3 code, but I'd be happy to try and work with anything. Thanks!
|
Well, I eventually got this working. The various documents referenced in the comment on my question had what I needed, and I gradually figured out which parameters passed to queuerunners corresponded to which parameters in the tf.data stuff.
There was one gotcha that took a while for me to sort out. In the inception implementation, the number of examples used for validation is rounded up to be a multiple of the batch size; presumably the validation set is reshuffled and some examples are used more than once. (This does not strike me as great practice, but generally the number of validation instances is way larger than the batch size, so only a relative few are double counted.)
In the tf.data stuff, enabling shuffling and reuse is a separate thing and I didn't do it on the validation data. Then things broke because there weren't enough unique validation instances, and I had to track that down.
I hope this helps the next person with this issue. Unfortunately, my code has drifted quite far from Inception v3 and I doubt that it would be helpful for me to post my modification. Thanks!
| 0.386912 | false | 1 | 5,814 |
2018-11-13 20:39:25.877
|
how to reformat a text paragrath using python
|
Hi, I was wondering how I could format a large text file by adding line breaks after certain characters or words. For instance, every time a comma appears in the paragraph, could I use Python to output an extra line break?
|
you can use the ''.replace() method like so:
'roses can be blue, red, white'.replace(',' , ',\n') gives
'roses can be blue,\n red,\n white', effectively inserting '\n' after every ','
| 0 | false | 1 | 5,815 |
2018-11-14 23:48:25.957
|
Python detecting different extensions on files
|
How do i make python listen for changes to a folder on my desktop, and every time a file was added, the program would read the file name and categorize it it based on the extension?
This is a part of a more detailed program but I don't know how to get started on this part. This part of the program detects when the user drags a file into a folder on his/her desktop and then moves that file to a different location based on the file extension.
|
Periodically read the files in the folder and compare to a set of files remaining after the last execution of your script. Use os.listdir() and isfile().
Read the extension of new files and copy them to a directory based on internal rules. This is a simple string slice, e.g., filename[-3:] for 3-character extensions.
Remove moved files from your set of last results. Use os.rename() or shutil.move().
Sleep until the next execution is scheduled. (A minimal sketch of this loop is below.)
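A minimal sketch of that loop (the watched folder, target directories and poll interval are assumed values to adapt):

import os
import shutil
import time

WATCHED = os.path.expanduser('~/Desktop/dropbox_folder')
RULES = {'.jpg': '/data/images', '.pdf': '/data/docs', '.csv': '/data/tables'}

seen = set(os.listdir(WATCHED))
while True:
    current = {f for f in os.listdir(WATCHED)
               if os.path.isfile(os.path.join(WATCHED, f))}
    for name in current - seen:                       # files added since last pass
        ext = os.path.splitext(name)[1].lower()
        target = RULES.get(ext)
        if target:
            shutil.move(os.path.join(WATCHED, name), target)
    seen = current
    time.sleep(5)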
| 1.2 | true | 1 | 5,816 |
2018-11-15 02:12:27.683
|
How do I configure settings for my Python Flask app on GoDaddy
|
This app is working fine on Heroku, but how do I configure it on GoDaddy using a custom domain?
When I navigate to the custom domain, it redirects to mcc.godaddy.com.
Which settings need to be changed?
|
The solution is to add a correct CNAME record and wait till the value you entered has propagated.
Go to DNS management and make following changes:
In the 'Host' field enter 'www' and in 'Points to' field add 'yourappname.herokuapp.com'
| 0 | false | 1 | 5,817 |
2018-11-15 03:51:30.570
|
Compare stock indices of different sizes Python
|
I am using Python to try and do some macroeconomic analysis of different stock markets. I was wondering about how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russel 2000 is only around 1,500. I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. Is there some statistical method where I can do this same thing in Python?
|
I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis.
These websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if Dow is 25000 points and S&P is 2500, then Dow is divided by 250 to get to 100 initially and S&P by 25. Then you have two indices that start at 100 and you then can compare them side by side.
The other method (works good only if you have two series) - is to set y-axis on the right hand side for one series, and on the left hand side for the other one.
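A minimal sketch of the rebase-to-100 approach with made-up prices (with real data you would load the two price series instead):

import pandas as pd

dates = pd.date_range('2018-01-01', periods=4)
dow = pd.Series([25000, 25300, 24800, 25500], index=dates)
russell = pd.Series([1500, 1490, 1520, 1555], index=dates)

rebased = pd.DataFrame({'Dow': dow, 'Russell 2000': russell})
rebased = rebased / rebased.iloc[0] * 100   # both series now start at 100
print(rebased)
# rebased.plot()   # directly comparable on one y-axis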
| 1.2 | true | 1 | 5,818 |
2018-11-15 06:53:57.707
|
How to convert 2D matrix to 3D tensor without blending corresponding entries?
|
I have data with the shape of (3000, 4), the features are (product, store, week, quantity). Quantity is the target.
So I want to reconstruct this matrix to a tensor, without blending the corresponding quantities.
For example, if there are 30 product, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30), with the corresponding quantity. Because there won't be an entry like (store A, product X, week 3) twice in entire data, so every store x product x week pair should have one corresponding quantity.
Any suggestions about how to achieve this, or there is any logical error? Thanks.
|
You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if product is 'XXX', then you want to know to which row of the first dimension (as product is the first dimension of your array) 'XXX' corresponds; same idea for store and week. Once you have all of this, you can simply iterate through all lines of your existing array and assign the value of quantity to the correct location inside your new array based on the indices stored in your conversion matrices for each value of product, store and week. As you said, it makes sense because there is a one-to-one correspondence.
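A minimal sketch of that scatter with a toy two-row dataset (the real data would be the (3000, 4) array from the question):

import numpy as np

data = np.array([
    # product, store, week, quantity
    [3, 7, 1, 12.0],
    [5, 2, 4, 30.0],
])

products = {p: i for i, p in enumerate(np.unique(data[:, 0]))}
stores = {s: i for i, s in enumerate(np.unique(data[:, 1]))}
weeks = {w: i for i, w in enumerate(np.unique(data[:, 2]))}

tensor = np.zeros((len(weeks), len(stores), len(products)))
for product, store, week, qty in data:
    tensor[weeks[week], stores[store], products[product]] = qty

print(tensor.shape)   # (2, 2, 2) for this toy example; (5, 20, 30) for yours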
| 0 | false | 1 | 5,819 |
2018-11-15 11:02:06.533
|
Installing packages to Anaconda Environments
|
I've been having an issue with Anaconda, on two separate Windows machines.
I've downloaded and installed Anaconda. I know the commands, how to install libraries, I've even installed tensorflow-gpu (which works). I also use Jupyter notebook and I'm quite familiar with it by this point.
The issue:
For some reason, when I create new environments and install libraries to that environment... it ALWAYS installs them to (base). Whenever I try to run code in a jupyter notebook that is located in an environment other than (base), it can't find any of the libraries I need... because it's installing them to (base) by default.
I always ensure that I've activated the correct environment before installing any libraries. But it doesn't seem to make a difference.
Can anyone help me with this... am I doing something wrong?
|
Kind of fixed my problem. It is to do with launching Jupyter notebook.
After switching environment via command prompt... the command 'jupyter notebook' runs jupyter notebook via the default python environment, regardless.
However, if I switch environments via anaconda navigator and launch jupyter notebook from there, it works perfectly.
Maybe I'm missing a command via the prompt?
| 1.2 | true | 1 | 5,820 |